William Gibson, the author of the seminal cyberpunk novel Neuromancer, is famously quoted as saying that “the future is already here — it’s just not evenly distributed”. I think that’s quite apt, as many technological innovations that should be everywhere never seem to get to the places they need to be. Indeed, even when the technology is available some people will simply refuse to use it, instead preferring to do things the way they’ve always done them because, simply, that’s the way it’s always been done. As a child of Generation Y I have no qualms about upsetting the status quo should it net me some tangible benefit.
I got thinking about this late yesterday afternoon during my usual weekly clean up of the house before the working week. There’s always been this one little innovation that I’ve admired and always wondered why it wasn’t more widespread. It’s simply the bottom of a cheap coffee mug from Ikea:
Now looking at it you could just write that off as simple artistic flair on an otherwise standard coffee cup, but you’d be dead wrong. You see those grooves on the bottom are actually designed to drain water away from the bottom of the cup when it’s upside down, i.e. when it’s in the dishwasher. Of all the cups in my household this is the only one that doesn’t have a little pool of water on top of it once the dishwasher has finished. It might seem like a relatively small thing, but those little grooves mean it can dry completely on its own without having a bunch of leftover dishwashing scum on top of it. It’s ingenious in its simplicity.
Everyone’s had these sorts of aha moments where you find something that makes you question how you’d lived without it beforehand. It’s interesting because we, as a species, are highly adaptable and yet conversely we’re also quite resistant to change, at least from my anecdotal point of view. That does seem to be changing, with the younger generations picking up new innovations more quickly than their predecessors did (like social networking and new media), so it’s quite possible that the resistance to change could be overcome in time. History does have an awful habit of repeating itself, however, and the rapid adoption we witness now might just be an artefact of them growing up in a technological world. Only time will tell on that front.
I get the feeling that we’re entering something of a golden era for gaming in Australia. 2 months ago we got in-principle support for the R18+ rating from all the attorneys-general, signalling the start of a reform process that would see Australia bring itself in line with the rest of the world. Shortly afterwards I discovered G2Play and was able to get the same games at a fraction of the price, skirting around Steam’s price-gouging Australian store. You’d think then that things really couldn’t get much better for us gamers, as we’ve basically had all of our demands met (even if through unofficial means), but it seems we’re in for more good times to come.
Just over a fortnight ago the Australian Law Reform Commission released a discussion paper on the review of the national classification scheme, the first such study done in over 20 years. What’s interesting about the discussion paper (if you’re after the Cliff’s Notes, check here) was its surprisingly level-headed approach not only to games, but to all current and potential future media. Indeed the report is very well aware that the past 2 decades have seen rapid changes that current legislation is just incapable of keeping up with, and that full reform of the system is required if it is to remain relevant. So surprising was the report that Kotaku writer Mark Serrels tracked down the chairman of the classification review, Terry Flew, and interviewed him about the paper.
What followed was this glorious piece (which I heartily recommend reading in its entirety):
“One of the things we were aware of from the outset taking on the inquiry,” begins Terry, “was that there was considerable dissatisfaction with the R18 classification issue – that this issue had been on the agenda for over a decade and, as you may well be aware, gamers were a very important group in making submissions to this enquiry. So we’re certainly aware of the importance of the issue.”
According to Terry, R18+ was an issue that really exemplified and exposed the difficulties of using 20 year old legislation to navigate a post-internet age.
My buddy, a former PR student, was familiar with Flew’s work but I’d never heard of him before. Employing some rudimentary Google-fu I found that he’s been highly interested in the games and new media industries for quite some time, publishing several books on them. He was, as far back as 2005, arguing that gaming is no longer the realm of the stereotypical, socially inept youngster. Flew also asserts that gamers were one of the catalysts in popularizing new media, thanks to the communities they developed. He’s far from a games apologist, however, and is taking a holistic view of the current classification scheme and where it should be heading.
Flew and the discussion paper are pushing forward with the idea that our classification scheme needs to be as unified as possible, in terms of both the process of classification and having a singular national body responsible for media classification. Right now, whilst classifications are made at the national level, enforcement is done at the state level, and thus states can basically opt out or form their own classification boards (like South Australia does), leading to an inconsistent application of the classification rules. If the recommendations in the paper are followed this system would likely be abolished in favour of a truly national scheme, which I feel is to the betterment of us all. The ratings would also be unified as much as possible across media platforms, meaning there wouldn’t be as many separate rating systems for specific media types.
One of the more interesting points of the discussion paper is the idea of co-regulation. In essence this would allow the games industry to employ their own classifiers (who I assume would be licensed/verified by the classification board) who could rate games all the way up to MA. Not only would this reduce the load on the classification board it would also demystify some of the classification process, making it more open and accountable. I think it’s a great idea and means that the Australian market won’t be as hostile to those looking to release their games here, especially if the price of classification is driven down by market forces.
With someone like Flew heading up the classification scheme reform I’ve got a really good feeling about its future and the future of the media industry in Australia. Such reform has been a long time coming, not just for games but for the classification system as a whole. The discussion paper is a great start and hopefully many of its recommendations make it into reality, but there’s still a long way to go until we see any of them realised. With Flew at the helm though I have every confidence that these sorely needed changes will eventually be implemented, and then I’ll stop blogging incessantly about it.
This blog can really be the bane of my existence sometimes. Most days I’m able to rifle through a couple hundred articles in my RSS feeds and find a topic that I can blurt out a few hundred words over. If I fail at that initial endeavour, however, I find myself in the rather undesirable situation of not having anything good to write about. This never used to be a problem: I’d simply close the new post page and go about the rest of my day like nothing had happened. A couple of months after I decided to start blogging regularly, however, I found myself unable to close the browser and move on; something was compelling me to blog.
I realised that I had just given myself OCD.
I can’t wholly blame the regular blogging dedication for my condition, however, as I think it’s due to a couple of factors. You see I’m rather keen on hard numbers, and the stats I run on this blog showed me that on the days I don’t blog at least half of the people that usually come here simply don’t. Since the major source of visitors here is Google I figure that’s because they kick me down a notch on off days in favour of more active content sources, and that’s held true for the past couple of years. Add that kind of aversion therapy to a regular habit and you’re onto a winner for developing OCD without thinking about it. At least that’s been my experience anyway.
Interestingly though I’ve found that these kinds of writer’s block days I get from time to time strongly correlate with the days I haven’t had enough sleep. Today’s block then comes courtesy of the server that hosts this blog being a right ass again, slowing everything down to a crawl. Last night I was up late updating all my other blogs’ WordPress installations and adding caching to them in the hope that this one would become responsive again. It seems to have made things better, but I’m still slamming the hell out of the 2 CPUs on this box, something which WordPress is notorious for doing. I’ll probably lose a few more hours on that tonight as I try to optimize the database, which is just as fun as it sounds.
I was going to write a witty end to this, but I’ve run out of steam on this meta-rant 😛
I’m a long time MMORPG player, coming up to almost 7 years if I count correctly. I haven’t been playing any recently since I’ve had so many other games to play (and 3 still awaiting their turn), but whenever there’s a drought of good game releases I’ll usually find myself back in World of Warcraft or the new MMO of the day. During that time I believe I’ve got a good feel for the general MMORPG community, as I’ve been involved in nearly every aspect of those games, from the lowly casual just looking for an hour of fun to the 3AM hardcore raider whose friends question his sanity.
One of the bugbears of the MMORPG community has always been that of players with more real-life (IRL) cash buying their way past parts of the game that less financially well-off people have had to slog through. It’s a genuine gripe, as it serves to lessen the value of their in game achievements if someone can just slap down their credit card and get the same thing. Most MMORPGs strictly forbid any form of real money trading because of this, usually banning people from selling in game items and currency and shutting down accounts that are caught doing so. There are a few that condone it in a limited sense, like EVE Online, which allows users to sell game time (keeping all the money in CCP’s pockets), but they are in the minority.
You’d think that, being a long time MMORPG player, I’d be with the community on this one, but I’ll have to admit to using a real money trading (RMT) service in the past. You see back when I was just starting out in EVE Online I wasn’t terribly familiar with the game’s rather ruthless take on death. For the most part I had stayed in high sec space, running PVE missions and slowly building my way up to one of the sexier battleships. I eventually got it and started running missions with it, and that’s when I was introduced to the world of high sec piracy. Not long after getting my shiny Megathron I lost it, along with all the cash I had plunged into it. Angry and frustrated, I turned to the online ISK sellers in order to get myself back to where I was, shelling out $25 real dollars to get myself back on my feet. I went back there once more in order to get myself ahead again, but I haven’t used any RMT services since then.
To me the fact that a player can pay a token amount to get ahead doesn’t lessen the achievement, mostly because I know that despite game developers’ best efforts it still goes on in every MMORPG. No matter how many characters they ban or how much currency they remove from the world there will always be a legion in waiting, ready to service those whose credit card is more readily accessible than their free time. The best thing game developers can do, then, is to make sure there are approved channels for doing so within the game, so that players can easily tell who’s bought their way to success. Driving it underground just ensures that the developers miss out on some potential revenue, whilst the players still suffer in the same way.
You can imagine then how disappointed I was to read how naive the World of Warcraft community was being when the release of a new pet, the Guardian Cub, which is tradeable in game, sparked widespread concern that RMT was coming:
The other, major, thing which sets the Guardian Cub apart? It’s tradable. Once you’ve purchased it, you can on-sell the little guy to other characters, in exchange for in-game gold or items – and you can set the price. Sticking with the “one-time-only” theme, once you’ve handed the Guardian Cub over, he will be added to the recipient’s Companions list and cannot be re-traded again later. “Be sure to choose a master wisely,” warns Blizzard.
Costing the now-standard US$10, but tradable for almost anything you’d like, the big-eyed Guardian Cub is being heralded as the Beginning of the End – opening the door to real money trading in World of Warcraft.
Realistically you can say that RMT is already here for World of Warcraft, as a quick Google search for “WoW gold” will net you dozens of sites ready, willing and able to switch out your cold hard cash for in game currency. The only difference this new pet makes to that equation is that there’s now a semi-legitimate way of doing it, even though most people who want the pet will just go straight to the Blizzard store to get it. Really, if you’re worried about Blizzard bringing an official RMT system to World of Warcraft you should open your eyes to the reality of the situation: it’s already happening, it’s just not Blizzard who’s doing it.
I’m not saying that RMT doesn’t have any effect at all on MMORPGs, but its overall impact is realistically quite low. RMT has been around for as long as MMORPGs have, and many players will go their entire in-game lives without even noticing the impact it has on their game of choice. Officially sanctioned methods are far and away better than their black market alternatives, and opposing them is akin to sticking your fingers in your ears and pretending it doesn’t happen already. It does happen, it will continue to happen, and you’d be best served by a supported method, whether you believe that or not.
The Mars rovers Spirit and Opportunity are by far one of the most successful missions we’ve ever had on another planet. Designed for a total mission time of only 90 days, they went on to outlive that deadline many times over, and if it weren’t for an insidious soil trap they’d both still be running today. Whilst Opportunity might still be running a good 7 years after it made planetfall, that doesn’t mean it’s capable of performing all the tasks we want to do, and so NASA has been busy designing a replacement rover. It’s quite something to behold, and it just recently hit a very important milestone.
The next rover is officially named the Mars Science Laboratory (though it was dubbed Curiosity in a naming contest, much like its predecessors) and considering its payload that’s fairly apt. Compared to the Mars Exploration Rovers it’s quite the beast, being 5 times more massive and carrying 10 times the scientific payload. To put that in perspective, the MSL will be about the same size as a Mini Cooper; the MERs combined would only equal it in length. Such size does present some challenges for getting it down on Mars, but the guys at NASA have devised a really ingenious way of making sure it arrives safely.
Many are familiar with the way the MERs made their landing on Mars: they used a combination of aerobraking and parachutes, along with inflatable bags on the outside that allowed them to bounce over the surface until they came safely to rest. The MSL is just too heavy for that kind of landing to work, so NASA has devised a multi-stage descent that utilizes aerobraking, parachutes, retrorockets and a crane system to drop it safely on the surface. I could try and explain it to you, but it’s far more impressive to see in video:
Compared to the way the MERs landed this does seem like an extremely overcomplicated way of landing but given the constraints it’s the best option available. NASA is stepping into unknown territory here so until the landing is confirmed I can see everyone being on tenterhooks.
Keen observers will have noticed something different about the MSL when compared to its MER cousins, most notably the distinct lack of solar panels. The MSL gets all of its power from a radioisotope thermoelectric generator (RTG), the same kind of device that’s powered past Mars landers and the extremely long-lived Voyager probes. These devices work by taking the heat from the radioactive decay of an element, usually plutonium-238, and generating electricity via a thermocouple. The RTG on board Curiosity will generate around 125W of power when it’s launched, dropping to 100W only after 14 years in service. The mission time frame is slated for just under 2 Earth years, so the RTG is more than up to the job, and there’s the tantalizing possibility that this particular rover could keep working for a very long time to come.
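As a rough back-of-the-envelope sketch (assuming the fuel is plutonium-238, which has a half-life of about 87.7 years), it's worth noting that radioactive decay alone doesn't fully explain the quoted drop from 125W to 100W; the thermocouples themselves also lose efficiency over time:

```python
import math

def rtg_power(p0_watts, years, half_life_years=87.7):
    """Electrical power remaining from fuel decay alone, using standard
    exponential decay; ignores thermocouple degradation."""
    return p0_watts * 0.5 ** (years / half_life_years)

# Starting at ~125 W, fuel decay alone after 14 years leaves ~112 W,
# so the quoted 100 W figure implies the thermocouples degrade too.
print(round(rtg_power(125, 14), 1))
```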
The MSL’s payload is simply staggering so I won’t recreate it fully here, but there are a few interesting pieces I’d like to highlight. The first is the MastCam, a high definition camera that will sit on top of Curiosity’s mast. It’s able to take 1.92 megapixel images and 10fps 720p video in true colour, something that other rovers have had to fudge with their black and white cameras and colour filters. There’s also ChemCam, which has an infrared laser capable of vaporizing rock at 7 meters and then analysing the resulting plasma ball, which is just plain cool (lasers, IN SPACE!).
The milestone I was hinting at earlier was that the MSL has just been sealed up in its payload fairing, ready for the trip to Mars:
With its launch window opening in less than two months, the Mars Science Laboratory was matched up with its heat shield at Kennedy Space Center’s Payload Hazardous Servicing Facility on Wednesday, Oct. 5.
The completed MSL rover, a.k.a. “Curiosity,” had already been fitted onto the “back shell powered descent vehicle” — a revolutionary landing mechanism that will first deploy parachutes to slow the capsule’s descent and then use rockets to hover above the Martian surface as it carefully lowers the one-ton rover down on cables before finally launching itself away to fall at a safe distance.
The launch is scheduled to happen between November 25th and December 18th this year, with the rover reaching Mars sometime in August next year. After that it will begin its 1-Martian-year mission, which is just a hair under 700 Earth days. The rover being fitted into the fairing now signals that NASA has quite a good shot at hitting that launch window, especially when they’re using the tried and true Atlas V launch system.
Curiosity really is a testament to what NASA is capable of when they put their minds to it. Everything about the new rover is boundary pushing and I’m sure that much like its predecessors it’ll continue to serve NASA and humanity long after its initial mission is completed. It’s going to be agony waiting for the landing confirmation but we’ve got a year and a long trip through space before we have to start worrying about that.
The computer (or whatever Internet capable device you happen to be viewing this on) is made up of various electronic components. For the most part these are semiconductors, devices whose conductivity sits somewhere between that of a conductor and an insulator, but there’s also a lot of supporting electronics made up of what we call the fundamental components. As almost any electronics enthusiast will tell you there are 3 such components: the resistor, the capacitor and the inductor, each with its own set of properties that makes it useful in electronic circuits. There’s been speculation about a 4th fundamental component for about 40 years, but before I talk about that I’ll need to give you a quick rundown of the current fundamentals’ properties.
The resistor is the simplest of the lot: all it does is impede the flow of electricity. They’re quite simple devices, usually a small brown package banded by 4 or more colours which denote just how resistive it actually is. Resistors are often used as current limiters, as the current that can pass through one is set by the voltage across it and its resistance (Ohm’s law: I = V/R). In essence you can think of them as narrow pathways that electric current has to squeeze through.
Capacitors are intriguing little devices and can best be thought of as small batteries. You’ve seen them if you’ve taken apart any modern device, as they’re those little canister looking things attached to the main board. They work by storing charge in an electrostatic field between two metal plates separated by an insulating material called a dielectric. Modern day capacitors are essentially two metal plates and the dielectric rolled up into a cylinder, something you could see if you cut one open. I’d only recommend doing this with a “solid” capacitor, as the dielectrics used in other capacitors are liquids that tend to be rather toxic and/or corrosive.
Inductors are very similar to capacitors in that they also store energy, but in a magnetic field rather than an electrostatic one. Again you’ve probably seen them if you’ve cracked open any modern device (or, say, looked inside your computer) as they look like little circles of metal with wire coiled around them. They’re often referred to as “chokes” as they tend to oppose changes in the current that induces the magnetic field within them, and at high frequencies they’ll appear as a break in the circuit, useful if you’re trying to keep alternating current out of your circuit.
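A minimal sketch of how each of the three components opposes alternating current, using the standard textbook formulas (the component values below are purely illustrative):

```python
import math

def resistor_impedance(r_ohms, freq_hz):
    # A resistor's opposition is the same at every frequency.
    return r_ohms

def capacitor_reactance(c_farads, freq_hz):
    # Capacitive reactance falls as frequency rises: Xc = 1 / (2*pi*f*C),
    # so capacitors pass high frequencies and block DC.
    return 1 / (2 * math.pi * freq_hz * c_farads)

def inductor_reactance(l_henries, freq_hz):
    # Inductive reactance rises with frequency: Xl = 2*pi*f*L, which is
    # why a choke looks like a break in the circuit at high frequencies.
    return 2 * math.pi * freq_hz * l_henries

# A 10 mH choke at 50 Hz mains vs a 1 MHz radio signal:
print(round(inductor_reactance(0.010, 50), 2))      # ~3.14 ohms
print(round(inductor_reactance(0.010, 1_000_000)))  # ~62832 ohms
```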
For quite a long time these 3 components formed the basis of all electrical theory, and nearly any component could be expressed in terms of them. However, back in 1971 Leon Chua explored the symmetry between these fundamental components and inferred that there should be a 4th: the memristor. The name is a combination of memory and resistor, and Chua stated that this component would not only have the ability to remember its resistance, but also have it changed by passing current through it: passing current in one direction would increase the resistance and reversing it would decrease it. The implications of such a component would be huge, but it wasn’t until 37 years later that the first memristor was created by researchers at HP Labs.
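As a toy illustration of that state-dependent resistance (loosely modelled on HP's published linear ion-drift description; every parameter value below is made up for demonstration, not taken from a real device), you can simulate it in a few lines:

```python
def simulate_memristor(current_amps, dt, steps,
                       r_on=100.0, r_off=16000.0,
                       d=10e-9, mu=1e-14, w0=5e-9):
    """Return the resistance trace as a constant current is applied.

    The device's resistance depends on a state variable w (the width of
    the doped, low-resistance region), which drifts as charge flows:
    illustrative parameters only (d = thickness, mu = ion mobility).
    """
    w = w0
    trace = []
    for _ in range(steps):
        # The state drifts in proportion to current; reversing the
        # current reverses the drift, and hence the resistance change.
        w += mu * r_on / d * current_amps * dt
        w = min(max(w, 0.0), d)  # clamp to the physical device bounds
        trace.append(r_on * (w / d) + r_off * (1 - w / d))
    return trace

forward = simulate_memristor(+1e-4, dt=1e-3, steps=100)
reverse = simulate_memristor(-1e-4, dt=1e-3, steps=100)
print(forward[-1] < forward[0])  # True: current one way lowers resistance
print(reverse[-1] > reverse[0])  # True: reversing it raises resistance
```

Crucially, because w stays where it is when the current stops, the device "remembers" its last resistance, which is the property that makes it interesting as a memory element.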
What’s really exciting about the memristor is its potential to replace other solid state storage technologies like Flash and DRAM. Due to memristors’ simplicity they are innately fast and, best of all, they can be integrated directly onto processor dies. If you look at the breakdown of a current generation processor you’ll notice that a good portion of the silicon is dedicated to cache, or onboard memory. Memristors have the potential to boost the amount of onboard memory to extraordinary levels, and HP believes they’ll be doing that in just 18 months:
Williams compared HP’s resistive RAM technology against flash and claimed to meet or exceed the performance of flash memory in all categories. Read times are less than 10 nanoseconds and write/erase times are about 0.1-ns. HP is still accumulating endurance cycle data at 10^12 cycles and the retention times are measured in years, he said.
This creates the prospect of adding dense non-volatile memory as an extra layer on top of logic circuitry. “We could offer 2-Gbytes of memory per core on the processor chip. Putting non-volatile memory on top of the logic chip will buy us twenty years of Moore’s Law,” said Williams.
To put this in perspective, Intel’s current flagship CPU ships with a total of 8MB of cache, and that’s shared between 4 cores. A similar memristor based CPU would have a whopping 8GB of onboard cache, effectively negating the need for external DRAM. Couple this with a memristor based external drive for storage and you’d have a computer that’s decades ahead of the curve in terms of what we thought was possible, and Moore’s Law can rest easy for a while.
This kind of technology isn’t your usual pie in the sky “it’ll be available in the next 10 years” malarkey, this is the real deal. HP isn’t the only one looking into it either: Samsung (one of the world’s largest flash manufacturers) has also been aggressively pursuing the technology and will likely debut products around the same time. For someone like me it’s immensely exciting, as it shows that there are still many great technological advances ahead of us, just waiting to be uncovered and put into practice. I can’t wait to see how the first memristor devices perform, as they will be a true generational leap in technology.
It’s been about 2 and a half years since we first heard about the National Broadband Network, although back then it was a much different beast than what it has become. Initially the NBN was mostly going to be a project that was only given initial seed funding from the government, with the rest to come from private industry backers. That proposal fell flat on its face when none of the bidders were able to provide a serious proposal, and it then transformed into a fully government funded project, to the tune of $47 billion. Keeping the project alive was one of the key points in swinging the election towards Labor’s win, albeit at the cost of deploying to regional towns first instead of major cities as originally planned.
The initial stages in Tasmania have been rolling out for some time, and the stage 2 deployments in select regional towns on the mainland have also started. Just last week came news that the first 14,000 residents who have been connected to the NBN can now sign up for plans with their respective ISPs, signalling the beginning of the commercial NBN:
From tomorrow, the 14,000 residents whose homes have been passed by the National Broadband Network’s first release site roll-out and aren’t already locked into alternate contracts with their internet service provider will be able to order an NBN service.
“The launch of commercial services over the fibre network in the mainland First Release Sites marks a significant milestone for the delivery of the NBN. It is the start of a new era of service and competition as providers begin to offer a range of different plans over our open-access wholesale network,” NBN Co head of product development and sales, Jim Hassell, said in a statement.
From just an idea to first light in under 3 years is pretty good by government standards, especially when the project is scheduled to run for at least another 5. The competition for consumers has also begun to heat up, with iiNet undercutting Internode and forcing them to rework their plan (it now stands at the same price, but with 30GB of data). This is great news for us consumers because it means that by the time the NBN is available to a much wider audience prices will probably be forced even lower, once the economies of scale start to kick in.
Even at these early stages the current plans available are quite comparable to their ADSL counterparts. For example I’m on an ADSL2+ connection with 250GB of data (one of their older plans, I believe) with a $10 “power pack” that makes my uploads not count and gives me a static IP address. The NBN equivalent is their silver plan, which is 25Mbps down/5Mbps up and comes in at $74.95 for 300GB, a saving of approximately $20/month over what I’m currently paying. For the same price as my current plan I could get the top tier of bandwidth along with an extra 50GB of data, which is quite amazing for a service that’s only available to 14,000 people.
How long it will be before such services are available to a good chunk of the Australian populace remains a mystery however. The current rollout map only goes up to Stage 2 which is only a few dozen locations and I haven’t been able to source any rollout plans past that. From the rumours I’ve heard major cities should be the next stage after the current one, but even then rollouts in those areas will take a long time to complete, especially if the TransACT rollout in Canberra is anything to go by.
All of this is pointing towards a very bright future for Australia and the NBN. No future government would risk cancelling a project that is this far under way, especially with the potential benefits for both consumers and business. The pricing being competitive with current ADSL plans means that there will be a real incentive for people to switch to the NBN once it becomes available and it will only get better in the future. I’m really looking forward to being able to be part of the NBN once it becomes available, even though I know it will be a long time coming.
It’s really no secret that the earlier a game review gets out, the more people are likely to read it. For the most part that’s held true for the reviews I’ve done, as people tend to look for info about a game just before or just after its release, usually to scope out whether they should buy it or not. Being one of the unwashed masses I’m not privy to early releases of games (except for one solitary exception with Modern Warfare 3), so all my game reviews usually come out weeks after the major sites have already posted theirs, usually with 1 or 2 follow-ups afterwards. Still I continue to write them because they’re the easiest writing I’ve ever done and I have an incredibly fun time doing so. Some of my reviews have also been decently popular, so I know there’s some value in them for my readers out there.
What value people were actually getting from my late-to-the-party reviews, though, wasn’t all that clear to me. Of course there are some who use them to inform their purchasing decisions (although no one’s told me of that) and a few will just be my regular readers catching up on my latest ramblings. I knew quite a few people stumbled onto my site when they were looking for wallpapers for particular games, or screenshots of certain characters which might not be available anywhere else. However, after reading a couple of early reviews of certain games I started to realize why reviews like mine are important.
They give the companies a chance to fix broken things.
Take for instance this week’s review of Dead Island. I didn’t get the game on launch day because of the price, but I happily snapped it up about a week after it was released. Had I got it on launch day and attempted to play it I would’ve been greeted with the developer build, which was buggy, filled with odd shortcuts like turning on no-clip, and overall a relatively unpleasant experience. Since I tend to avoid reviews of games I intend to review myself I wouldn’t have known about these issues, and would’ve panned the developers for releasing such a half-assed product. Coming into it later than the usual flood of reviews meant I got to experience the game as intended, and I believe my review reflects a more accurate picture of what the developers hoped to release.
Another game I was hoping to review in the future was Rage, and of course my platform of choice will be the PC. However, according to initial reports it’s sounding an awful lot like the Dead Island release, with the game being horribly buggy and glitchy. Since I’m still waiting on my pre-order keys to arrive (and I have probably 3 other games I could be playing at the moment) I haven’t been able to give Rage a go yet, but it seems like giving the game a miss for a week or so might be the best option, just so that I’m not reviewing the current mess that everyone is complaining about.
Delayed reviews then, whilst probably not garnering the same amount of press as their day-one counterparts, serve to showcase what the game is capable of once you get past the initial bumps. It’s a good thing for small-timers like myself who don’t have the privilege of early access, too, as we have the luxury of taking our time with the games and making sure the experience we’re getting is the best the developers could deliver. If the game is still a smoking wreck at that point then it deserves what’s coming to it, but realistically if the launch problems were an honest mistake (like Dead Island’s were) then they should be easy to fix.
Of course I wouldn’t turn down the opportunity to review a game before it was released if I was given the opportunity, hint hint 😉
The technology blogosphere has been rampant with speculation about what the next iPhone would be for the last couple of months, as it usually is in the ramp-up to Apple’s yearly iPhone event. The big question on everyone’s lips has been whether we’d see an iPhone 5 (a generational leap) or something more like a 4S (an incremental improvement on last year’s model). Mere hours ago Apple announced the latest addition to its smartphone line-up: the iPhone 4S. Like the 3GS was to the 3G, the iPhone 4S is definitely a step up from its predecessor, but it retains the same look and feel, leaving the next evolution in the iPhone space to come next year.
If you compared the 4 and the 4S side by side you’d be hard pressed to tell the difference between them, since both of them sport the same screen. The one difference you’d be able to pick up on is the redesigned antenna, done to avoid another antennagate fiasco. The major differences are on the inside, with the iPhone 4S sporting a new dual-core A5 processor, an 8 megapixel camera capable of 1080p video, and a combined quad-band GSM and CDMA radio. Spec-wise the iPhone 4S is a definite leap up from the 4, but how does it compare to other handsets that are already available?
Siri is a personal digital assistant built around interpreting natural language. At its heart Siri is a voice command and dictation engine, able to translate human speech into actions on the iPhone 4S. From the demos I’ve seen on the site its capabilities are quite varied, ranging from rudimentary things like setting appointments through to searching for restaurants around you and sorting them by rating. Unlike other features which have been retro-fitted onto the previous generation, Siri will not be making an appearance on anything less than the iPhone 4S thanks to its intensive processing requirements. It’s definitely an impressive feature, but I’m sceptical as to whether this will be the killer app that drives people to upgrade.
Now I was doubtful of how good the voice recognition could really be; I mean, if YouTube’s transcribe-audio-to-captions service is anything to go by, voice recognition done right is still in the realms of black magic and sorcery. Still, there are reports that it works exactly as advertised, so Apple might have got it right enough that it passes as usable. The utility of talking into your phone to get it to do something remains in question, however, as whilst voice commands are always a neat feature to show off for a bit, I’ve never met anyone who uses them consistently. My wife does her darnedest to use voice commands whenever she can, but 9 times out of 10 she wastes more time getting them to do the right thing than she would have otherwise. Siri’s voice recognition might be the first step towards making this work, but I’ll believe it when you can use it in a moving car or when someone else is talking in the room.
Will I be swapping out my S2 for an iPhone 4S? Nope, there’s just nothing compelling enough for me to make the switch, although I could see myself being talked into upgrading the wife’s aging 3GS for this newer model. In fact I’d say owners of the 3GS and below are the only ones with a truly compelling reason to upgrade, unless the idea of talking at your phone is just too good to pass up. So overall I’d say my impression of the 4S is mixed, but that’s really no different from my usual reaction to Apple product launches.
Windows 7, whilst having been around in some form for quite a while, has only been officially available for just on 2 years. Its successor, the ingeniously named Windows 8, is scheduled to hit the market sometime late next year, or around 3 years after its predecessor’s release. Should that stay on schedule Microsoft will be on track to keep its promise of releasing new versions of Windows every 3 years or so, hopefully avoiding the long development cycle that plagued Vista and signalling to corporate IT that yes, XP really is about to die. As part of their recent BUILD conference Microsoft released a developer preview of Windows 8, aimed at those looking to have a play with the up-and-coming OS and at getting developers started on building apps for the platform. I’ve had my hands on a copy for the past week or so and I’ve given it the once-over, with some rather interesting results.
Windows 8 installs just like its predecessor does, although this one required me to break out one of my dual-layer DVDs in order to fit the image onto a single disc. The differences begin when it comes to configuring Windows 8 once the install has completed. Most noticeably the UI at these stages has been completely redone in the Metro style, signalling that Microsoft believes this will be the main way in which people use their computers in the future. In a similar vein to what Apple has long done, Microsoft now gives you the option of signing into your PC with a Windows Live account, allowing you to sync certain settings with the cloud. For tablets and desktop PCs alike this will be a good feature for your average home user, especially if Microsoft includes some automated backup of, say, the My Documents folder to a user’s SkyDrive account.
The first screen (pictured above) is what will be presented to users after their first login. Although there might be some familiar names on there (like Internet Explorer and Control Panel) these items are in fact Metro applications based on the new WinRT framework. The icons with the darker green background are shortcuts to traditional desktop applications, and the desktop itself can be accessed by the aptly named Desktop shortcut. It’s quite obvious that this interface is designed with touch in mind, as the icons are massive compared to their desktop counterparts and navigation comes by way of swiping mouse motions or the mouse wheel. I can see this interface replacing the regular Windows desktop for a lot of users, especially if the app scene becomes comparable to Apple’s.
Diving into the desktop interface reveals a few new features. Gone are the rounded corners that we’ve become used to since Vista, and back are the sharp angular edges that are somewhat reminiscent of Windows XP. The Aero translucency, which I’ve always loved, is still around, though it will no doubt continue to offend those die-hard “Windows Classic” fans. The major change you’ll notice is the addition of the ribbon at the top of the Explorer window. Now the ribbon has always been a point of contention, and I’ll be honest, I hated it too when I first saw it. In Office though it made quite a lot of sense and I’ve grown to like it. For Explorer, on the other hand, I’m not so sure, since all of the items on there are familiar context menu items or keyboard shortcuts. Thankfully you can hide the entire thing by clicking the little caret in the right-hand corner, so it’s a non-issue.
Gone, too, is the Start menu, outright replaced by the new Metro interface you saw earlier. Clicking the Start button or hitting the Windows key will spin you right out of desktop mode and into Metro, although this seems to be dependent on the hardware you install it on. In a virtual machine that seems to be the default behaviour, but on my physical test box I was able to bring up a context menu with a couple of options (log off, switch user, etc.). This is somewhat disconcerting for an admin user like myself who’s become quite accustomed to finding most things by hitting the Windows key and then typing in what I want (Windows Desktop Search). It’s still available through Windows + F, however, but only in Metro form:
However as an OS it’s pretty much just Windows 7 underneath all the Metro changes, as I haven’t found anything significant under the hood that isn’t already in Windows 7. This is both good and bad: it means it’ll be a somewhat easy transition for administrators changing users over, but there doesn’t seem to be a whole lot of innovation apart from Metro and WinRT. Of course this is still very much an alpha-grade product (the UI is constantly breaking in my virtual machine; it’s slightly better on physical hardware) so there could be a lot of stuff that’s just not turned on or not yet implemented. I’m sure the next year will bring a lot of changes to the OS in both visual and non-visual aspects, so I’ll reserve judgement until it’s more feature complete.
For what it’s intended for though (i.e. getting developers working on Metro apps)? This build seems perfect. I’ve yet to tinker with building an application past starting up Visual Studio to see if it works, but the build is functional enough to test out everything that a budding app developer would need to. It’s far from being usable on an everyday machine though, even for an early adopter; I’d say we’re about 6 months away from it being ready in that kind of form, much like its predecessor was at this stage. There’s still a lot I haven’t had the chance to fiddle with yet, so I’ll probably be revisiting Windows 8 a couple of times, as well as the new Visual Studio.