It’s no secret that I’m a big fan of my Samsung Galaxy S2, mostly because the specifications are enough to make any geek weak at the knees. It’s not just geeks that are obsessed with the phone either, as Samsung has moved an impressive 10 million of them in the 5 months that it’s been available. Samsung has made something of a name for itself as the phone manufacturer to have if you’re looking for an Android handset, especially when you consider that Google used Samsung’s original Galaxy S as the basis for its previous flagship phone, the Nexus S. Rumours have been circulating for a while that Samsung would once again be the manufacturer of choice, a surprising rumour considering Google had just sunk $12.5 billion into acquiring Motorola Mobility.
Yesterday, however, saw the announcement of Google’s new flagship phone, the Galaxy Nexus, and sure enough it’s Samsung hardware that’s under the hood.
The standout feature of the Galaxy Nexus is the gigantic screen, coming in at an incredible 4.65 inches with a resolution of 1280 x 720 (the industry standard for 720p). That gives it a pixel density of around 316 PPI, just below the 326 PPI of the iPhone 4/4S’s Retina display, which is amazing when you consider the screen is well over an inch bigger. As far as I can tell it’s the highest resolution on a smart phone currently on the market, and only a handful of handsets boast a similar sized screen. Whether this monster of a screen will be a drawcard though is up for debate, as not all of us are blessed with the giant hands needed to take full advantage of it.
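The PPI figure is easy to verify yourself: it’s just the screen’s diagonal resolution in pixels divided by its diagonal size in inches. A quick sketch in Python:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Galaxy Nexus: 1280 x 720 panel on a 4.65 inch diagonal
galaxy_nexus = ppi(1280, 720, 4.65)
print(f"Galaxy Nexus: {galaxy_nexus:.1f} PPI")  # ~315.8 PPI
```

The same function applied to any other handset’s specs makes for an easy apples-to-apples comparison, keeping in mind that manufacturers sometimes round the quoted diagonal, which shifts the result by a few PPI.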
Under the hood it’s a bit of a strange beast, especially when compared to its predecessors. It uses a Texas Instruments OMAP 4460 processor (dual-core Cortex-A9, 1.2GHz) instead of Samsung’s own Exynos SoC, coupled with a whopping 1GB of RAM. The accompanying hardware includes a 5MP camera capable of 1080p video, all the usual connectivity options with the addition of NFC and 802.11n wireless and, strangely enough, a barometer. The Galaxy Nexus does not feature expandable storage like most of its predecessors did, instead coming in 16GB and 32GB variants. All up it makes for a phone that’s definitely a step up from the Galaxy S2, though not in every regard, with some features on par with or below those of the S2.
Looking at the design of the Galaxy Nexus I couldn’t help but notice that it has somewhat regressed to the previous design style, being more like the Galaxy S than the S2. As it turns out this is quite deliberate, as Samsung designed the Galaxy Nexus in such a way as to avoid more lawsuits from Apple. It’s rather unfortunate as the design of the Galaxy S2 is really quite nice and I’m not particularly partial to the rounded look at all. Still, I can understand why they want to avoid more problems with Apple; it’s a costly exercise and neither of them is going to come out the other side smelling of roses.
Hand in hand with the Galaxy Nexus announcement Google has also debuted Ice Cream Sandwich, the latest version of the Android OS. There’s a myriad of improvements that I won’t go through here (follow the link for a full run down) but notable features are the ability to unlock your phone by having it recognize your face, integrated screen capture (yes, it took this long for that to become a default feature), an NFC sharing app called Android Beam and a better interface for seeing how much data you’re using, including the ability to kill data hogging apps. Like the Galaxy Nexus itself Ice Cream Sandwich is more of an evolutionary step than a revolutionary one, but it looks like a worthy complement to Google’s new flagship phone.
The Galaxy Nexus shows that Samsung is very capable of delivering impressive smart phones over and over again. The hardware, for the most part, is quite incredible, bringing features to the table that haven’t been seen before. Ice Cream Sandwich looks to be a good upgrade to the Android operating system and coupled with the Galaxy Nexus the pair will make one very desirable smart phone. Will I be getting one of them? Probably not, as my S2 is more than enough to last me until next year when I’ll be looking to upgrade again, but I can’t say I’m not tempted.
Voice controlled computers and electronics have always been a staple of science fiction, toying with the idea that we could simply issue commands to our silicon-based underlings and have them do our bidding. Even though technology has come an incredibly long way in the past couple of decades, understanding natural language is still a challenge that remains unconquered. Modern day speech recognition systems often rely on key words in order to perform the required commands, usually forcing the user to use unnatural language in order to get what they want. Apple’s latest innovation, Siri, seems to be a step forward in this regard and could potentially signal a shift in the way people use their smartphones and other devices.
On the surface Siri appears to understand quite a bit of natural language, being able to recognize that a single task can be phrased in several different ways. Siri also appears to have a basic conversational engine so that it can interpret commands in the context of what you’ve said to it before. The scope of what Siri can do, however, is quite limited, but that’s not necessarily a bad thing: being able to nail a handful of actions from natural language is still leaps and bounds above what other voice recognition systems are currently capable of.
Siri also has a sense of humour, often replying to out of left field questions with little quips or amusing shut downs. I was, however, disappointed when I tried the classic nerd line of “Tea. Earl Grey. Hot”, which received the following response:
This screenshot also shows that Siri’s speech recognition isn’t always 100% accurate either, especially when it’s trying to guess what you were saying.
Many are quick to draw comparisons between Siri and Android’s voice command system or apps available on that platform like Vlingo. The big difference there though is that those services are much more like search engines than Siri, performing the required actions only if you utter the commands and key words in the right order. That’s the way nearly all voice operated systems have worked in the past (like those automated call centres that everyone hates) and it’s usually the reason why most people are disappointed in them. Siri has the one up here as people are encouraged to speak to it in a natural way, rather than changing the way they speak in order to be able to use it.
For all the good that Siri is capable of accomplishing, it’s still at its heart a voice recognition system and with that comes some severe limitations. Ambient noise, including others talking around you, will confuse Siri completely, making it unusable unless you’re in a relatively quiet area. I’m not just saying this as a general thing either; friends with Siri have mentioned this as one of its shortcomings. Of course this isn’t unique to Siri and is unlikely to be a problem that can be overcome by technology alone (unless you could speak to Siri via a brain implant, say).
Like many other voice recognition systems Siri is geared more toward the accent of the country it was developed in, i.e. American. This isn’t just limited to the different spellings between, say, the Queen’s English and American English, but also covers the inflections and nuances that different accents introduce. Siri will also fall in a crying heap when a word’s pronunciation and spelling diverge, again limiting its usefulness. This is a problem that can and has been overcome in the past by other speech recognition systems and I would expect that, with additional languages for Siri already on the way, these kinds of problems will eventually be solved.
A fun little fact that I came across in my research for this post was that Apple still considers Siri to be a beta product (noted right at the bottom of the page, in small text that’s easy to miss). That’s unusual for Apple as they’re not one to release a product unfinished, even if that comes at the cost of features not making it in. In a global sense Siri really is still beta, with some of its services, like Yelp and location based features, not being available to people outside of the USA (as the above screenshot shows). Apple is of course working to make them all available but it’s quite unusual for them to do something in this fashion.
So is Siri the next step in user interfaces? I don’t think so. It’s a great step forward for sure and there will be people who make heavy use of it in their daily activities. However once the novelty wears off and the witty responses run out I don’t see a compelling reason for people to continue using Siri. The lack of a developer API (and no mention of whether one will be available) means that the services that can be hooked into Siri are limited to those that Apple develops, so some really useful services might never be integrated, forcing users to go back to native apps. Depending on how many services are excluded, people may just find it easier to not use Siri at all, opting for the already (usually quite good) native app experience. I could be proven wrong on this, especially with technology like Watson on the horizon, but for now Siri’s more of a curiosity than anything else.
The technology blogosphere has been rampant with speculation about the next iPhone for the last couple of months, as it usually is in the ramp up to Apple’s yearly iPhone event. The big question on everyone’s lips has been whether we’d see an iPhone 5 (a generational leap) or something more like a 4S (an incremental improvement on last year’s model). Mere hours ago Apple announced the latest addition to its smart phone line up: the iPhone 4S. Like the 3GS was to the 3G, the iPhone 4S is definitely a step up from its predecessor but it retains the same look and feel, leaving the next evolution in the iPhone space to come next year.
If you compared the 4 and the 4S side by side you’d be hard pressed to tell the difference between them, since both sport the same screen. The one difference you might pick up on is the redesigned antenna, done to avoid another antennagate fiasco. The major differences are on the inside, with the iPhone 4S sporting a new dual-core A5 processor, an 8 megapixel camera capable of 1080p video, and a combined quad-band GSM and CDMA radio. Spec wise the iPhone 4S is a definite leap up from the 4, but how does it compare to other handsets that are already available?
Siri is a personal digital assistant based around interpreting natural language. At its heart Siri is a voice command and dictation engine, able to translate human speech into actions on the iPhone 4S. From the demos I’ve seen on the site its capabilities are quite broad and varied, from rudimentary things like setting appointments to searching for restaurants around you and sorting them by rating. Unlike other features which have been retro-fitted onto the previous generation, Siri will not be making an appearance on anything less than the iPhone 4S thanks to its intensive processing requirements. It’s definitely an impressive feature, but I’m sceptical as to whether this will be the killer app that drives people to upgrade.
Now I was doubtful of how good the voice recognition could really be; if YouTube’s transcribe audio to captions service is anything to go by, voice recognition done right is still in the realms of black magic and sorcery. Still, there are reports that it works exactly as advertised, so Apple might have got it right enough that it passes as usable. The utility of talking into your phone to get it to do something remains in question however, as whilst voice commands are always a neat feature to show off for a bit I’ve never met anyone who’s used them consistently. My wife does her darnedest to use voice commands whenever she can but 9 times out of 10 she wastes more time getting it to do the right thing than she would have otherwise. Siri’s voice recognition might be the first step towards making this work, but I’ll believe it when you can use it in a moving car or while someone else is talking in the room.
Will I be swapping out my S2 for an iPhone 4S? Nope, there’s just nothing compelling enough for me to make the switch, although I could see myself being talked into upgrading my wife’s aging 3GS to this newer model. In fact I’d say 3GS and below owners are the only ones with a truly compelling reason to upgrade, unless the idea of talking at your phone is just too good to pass up. So overall I’d say my impression of the 4S is mixed, but that’s really no different from my usual reaction to Apple product launches.
It was just under 2 years ago that I wrote my first (and only) post on smartphone virtualization, approaching it with the enthusiasm that I do most cool new technologies. At the time I guessed that VMware would eventually look to integrate this idea with some of their other products, in essence turning users’ phones into dumb terminals so that IT administrators could have more control over them. However the exact usefulness was still not clear, as at the time most smartphones were only just capable of running a single instance, let alone another one with all the virtualization trimmings that’d inevitably slow it down. Android was also somewhat of a small time player back then, having only 5% of the market (similar to Windows Phone 7 at the same stage in its life, funnily enough), making this a curiosity more than anything else.
Of course a lot has changed in the time between that post and now. The then market leader, RIM, is now struggling with single digit market share when it used to make up almost half the market. Android has succeeded in becoming the most popular platform, surpassing Apple who held the crown for many years prior. Smartphones have also become wildly more powerful, with many of them touting dual cores, oodles of RAM and screen resolutions that would make my teenage self green with envy. With all this in mind, the idea of running some kind of virtualized environment on a smartphone doesn’t seem all that ludicrous any more.
Increasingly IT departments are dealing with users who want to integrate their mobile devices with their workplace in lieu of using a separate, work specific device. Much of this pressure came initially from the iPhone, with higher ups wondering why they couldn’t use their devices to access work related data. For us admin types the reasons were obvious: it’s an unapproved, untested device which by rights has no business being on the network. However the pressure to capitulate to their demands was usually quite high and workarounds were sought. Over the years these have taken various forms, but the best answer would appear to lie within the world of smartphone virtualization.
VMware has been hard at work creating a full blown virtualization system for Android that allows a user to have a single device containing both their personal handset and a secure, work approved environment. In essence there’s an application that lets the user switch between the two, allowing them to have whatever handset they want whilst still allowing IT administrators to create a standard, secure work environment. Android is currently the only platform that supports this fully thanks to its open source status, although there are rumours of it coming to the iOS line of devices as well.
It doesn’t stop there either. I predicted that VMware would eventually integrate their smartphone virtualization technology into their View product, mostly so that the phones would just end up being dumb terminals. This hasn’t happened exactly, but VMware did go ahead and imbue their View product with the ability to present full blown workstations to tablets and smartphones through a secure virtual machine running on said devices. This means that you could potentially have your entire workforce running off smartphones with docking stations, enabling users to take their work environment with them wherever they want to go. It’s shockingly close to Microsoft’s Three Screens idea and with Google announcing that Android apps are now portable to Google TV devices you’d be forgiven for thinking that they outright copied the idea.
For most regular users these kinds of developments don’t mean a whole lot, but they do signal the beginning of the convergence of many disparate experiences into a single unified one. Whilst I’m not going to say that any one platform will eventually kill off the others (each of the three screens has a distinct purpose), we will see a convergence in the capabilities of each platform, enabling users to do all the same tasks no matter which platform they are using. Microsoft and VMware are approaching this idea from two very different directions, the former unifying the development platform and the latter abstracting it away, so it will be interesting to see which approach wins out or if they too eventually converge.
No beating around the bush on this one, Steve Jobs has resigned:
To the Apple Board of Directors and the Apple Community:
I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.
I hereby resign as CEO of Apple. I would like to serve, if the Board sees fit, as Chairman of the Board, director and Apple employee.
As far as my successor goes, I strongly recommend that we execute our succession plan and name Tim Cook as CEO of Apple.
I believe Apple’s brightest and most innovative days are ahead of it. And I look forward to watching and contributing to its success in a new role.
I have made some of the best friends of my life at Apple, and I thank you all for the many years of being able to work alongside you.
The news shouldn’t come as a shock to anyone. Jobs has been dealing with health problems for many years now and he’s had to scale back his involvement with the company as a result. The appointment of Tim Cook as the new CEO shouldn’t come as a surprise either, as Cook has been acting as the interim CEO whenever Jobs has been absent during the past few years. Jobs’ involvement in Apple won’t completely cease either if the board approves his appointment, which I doubt they’ll think twice about doing. The question on everyone’s lips is, of course, where Apple will go from here.
The stock market understandably reacted quite negatively, with Apple shares down a whopping 5.23% at the time of writing. The reasons behind this are many but primarily it comes down to the fact that Apple, for better or for worse, has built much of its image around its iconic CEO. Jobs has also had a strong influence over the design of new products; Cook, whilst more than capable of stepping up, has no such background, being more of a traditional operations guy. Of course no idea exists in a vacuum and I’m sure the talented people at Apple will be more than capable of continuing to deliver winning products just as they did with Jobs at the helm.
But will that be enough?
For the most part I’d say yes. Whilst the Jobs fan club might be one of the loudest and proudest out there, the vast majority of Apple users are just interested in the end product. Whilst they might lose Jobs’ vision for product design (although even that’s debatable since he’s still on the board), Apple has enough momentum with its current line of products to carry it over any rough patches whilst it finds its feet in a post-Jobs world. The stock market’s reaction is no indicator of consumer confidence in Apple and I’m sure only a minority of people have decided to stop buying Apple products now that Jobs isn’t at the helm.
Apple’s current success is undeniably down to Jobs’ influence and his absence will prove to be a challenge for the company to overcome. Still, I highly doubt that Apple will suffer much because of this (the share price really only affects the traders and speculators), with a year or two of products in the pipeline that Jobs would have presided over. The question is whether their new CEO, or any public face of Apple, will be able to cultivate an image on the same level as Jobs did.
So last Friday saw the announcement that HP was spinning off its WebOS/tablet division, a move that sent waves through the media and blogosphere. Despite being stuck for decent blog material on the day I didn’t feel the story had enough legs to warrant investigation; anyone but the most dedicated of WebOS fans knew that the platform wasn’t going anywhere fast. Heck, it took me all of 30 seconds on Google to find the latest figures that have it pegged at somewhere around 2%, right there with Symbian (those are smart phone figures, not overall mobile), trailing the apparently “failing” Windows Phone 7 platform by a whopping 7%. Thus the announcement that they were going to dump the whole division wasn’t much of a surprise and I set about trying to find something more interesting to write about.
Over the weekend though the analysts got their hands on some juicy details that I can get stuck into.
Now alongside the announcement that WebOS was getting the boot, HP also announced that it was considering exiting the PC hardware business completely. At first glance that would seem like a ludicrous idea, as that division is their largest with almost $10 billion in revenue, but their enterprise services division (which is basically what used to be EDS) is creeping up on that figure quite quickly. Such a move also wouldn’t see them exit the server hardware business, which would be a rather suicidal move considering they’re the second largest player there with 30% of the market. Rather, it seems HP wants out of the consumer end of the market and wants to focus on enterprise software, services and the hardware that supports them.
It’s a move that several similar companies have taken in the past when faced with downwards trending revenues in the hardware sector. Back when I worked at Unisys I remember being told that they now derive around 70% of their revenue from outsourcing initiatives and only 30% from their mainframe hardware sales. They used to be a mostly hardware oriented company but switched to professional services and outsourcing after several years of negative growth. HP on the other hand doesn’t seem to be suffering any of these problems, which raises the question: why would they bother exiting what seems to be a lucrative market for them?
It was a question I hadn’t really considered until I read this post from John Gruber. Now I’d known that HP had got a new CEO after Mark Hurd was ejected over that thing with former Playboy girl Jodie Fisher (and his expense account, but that’s nowhere near as fun to write), but I hadn’t caught up with who they’d hired as his replacement. Turns out it’s former SAP CEO Leo Apotheker. Now HP’s decisions to spin off WebOS (and potentially their PC division) make a lot of sense, as enterprise software is the kind of business Apotheker has a great deal of experience in. Couple that with their decision to buy Autonomy, another enterprise software company, and it seems almost certain that HP is heading towards the end goal of being a primarily services based company.
Of course with HP exiting the consumer market after being in it for such a short time, people started to wonder if there will ever be a serious competitor to Apple’s offerings, especially in the tablet market. Indeed it doesn’t look good for anyone trying to crack into that market as it’s pretty much all Apple all the time, and if a multi-billion dollar company can’t do it then there’s not much hope for anyone else. However Android has made some impressive inroads into this Apple dominated niche, securing a solid 20% of the market. Just as it did with the iPhone before it, no single vendor will completely unseat Apple in this space; instead, Android’s strength will come from the sheer variety that its handset makers offer. We’ve yet to see a Galaxy S2-esque release in the Android tablet space but I’m sure one’s not too far off.
It’ll be interesting to see how HP evolves over the next year or so under Apotheker’s leadership, as its current direction is vastly different to that of the HP of the past. This isn’t necessarily a good or bad thing for the company either; whilst they might not have any cause for concern now, making this transition early could avoid the pain of attempting it further down the track. The WebOS spin off is just the first step in this long journey for HP and there will be many more for them to take if they’re to make the transition to a professional services company.
For the past year I was somewhat of an anomaly amongst my tech friends because I chose to get an iPhone 3GS instead of one of the Android handsets. The choice was simple at the time (I had an app that I wanted to develop for it and needed something to test on) but I still copped it sweet whenever I said something positive about the platform, since I’d usually be the only one with an Apple product in the area. When it came time to buy a new phone again, as I get to do every year for next to nothing, I resisted for quite a while, until one of my friends put me onto the Samsung Galaxy S2¹. The tech specs simply overwhelmed my usual fiscal conservativeness and less than a week later I was in possession of one, and so began my experience with the Android platform.
The default UI that comes with all of Samsung’s Android handsets, called TouchWiz, feels uncannily similar to that of iOS. In fact it’s so familiar that Apple is suing Samsung because of it, though if you look at many other Android devices you’ll see they share similar characteristics to the ones Apple claims Samsung ripped off. For me the Android UI wins out simply because of how customizable it is, allowing me to craft an experience that’s tailored to my use. Widgets, basically small front ends to your running applications, are a big part of this, enabling me to put things like a weather ticker on my front page. The live wallpapers are also pretty interesting, if only to liven up the otherwise completely static UI.
What impresses me most about the Android platform is the breadth and depth of the applications and tweaks available for the system. My first few days with Android were spent just getting myself back up and running like I was on my iPhone, finding all the essential applications (Facebook, Twitter, Shazam, Battle.net Authenticator, etc) and comparing the experience to the iPhone. For the most part the experience on Android is almost identical, especially with applications that have large user bases, but some of them were decidedly sub-par. Most would say this is due to the fragmentation of the Android platform, but the problems I saw didn’t stem from those kinds of issues, just a lack of effort on the developers’ part to polish the experience. This happened more often with applications that weren’t “Android born”, as many of the native apps were leaps and bounds ahead of them in terms of quality.
The depth of integration that applications and tweaks can have with the Android platform is really where it shines. Skype, for example, can take over your outgoing calls and route them through its network, which could be a major boon if you’re lucky enough to have a generous data plan. It doesn’t stop with application integration either; there are numerous developers dedicated to making the Android platform itself better through custom kernels and ROMs. The extra functionality I’ve unlocked on my phone by installing the CF-Root kernel, one that gives me root access, is just phenomenal. I’ve yet to find myself wanting for any kind of functionality and rarely have I found myself needing to pay for something, unless it was for convenience’s sake.
Android is definitely a technophile’s dream, with the near limitless possibilities of an open platform laid out before you. However even if you didn’t bother with all the faffing about that I did, you still wouldn’t be getting a sub-par experience, at least on handsets sporting the TouchWiz interface. Sure, you’d miss out on some useful apps (like Titanium Backup), but realistically many of the root enabled apps aren’t aimed at your everyday user. You still get all the benefits of the deep integration with the Android platform, which is where a good 90% of the value lies for most users anyway.
Despite all of this gushing over Google’s mobile love child I still find it hard to recommend it as the platform for everyone. For anyone with a slight technical bent it’s the platform to go for, especially if you’re comfortable modding your hardware, and it’s still quite usable for the majority who aren’t. However Apple’s platform does automate a lot of the rudimentary stuff for you (like backing up your handset when you sync it), which Android, as a platform, currently doesn’t. Additionally, thanks to the limited hardware range you’re far less likely to encounter some unknown issue on iOS than you are on Android which, if you’re the IT support for the family like me, can make your life a whole lot easier.
Android really impressed me right from the get go and continued to do so as I spent more time getting to know it and digging under the hood to unlock even more value from it. The ability to interact with, modify or outright replace parts of the underlying Android platform is what makes it great and is the reason why it’s the number 1 smart phone platform today. As a long time smart phone user I feel that Android is by far the best platform for both technophiles and regular users alike, giving you the usability you’ve come to expect from iOS with the tweakability that used to be reserved only for Windows Mobile devices.
Now I just need to try out a Windows Phone 7 device and I’ll have done the mobile platform trifecta.
¹I’m reviewing the handset separately since Android is available on hundreds of handsets and it wouldn’t be fair to lump them together as I did with the iPhone. Plus the Galaxy S2 deserves its own review anyway, and you’ll hopefully find out why this week.
The last two years have seen a major shake up in the personal computing industry. Whilst I’m loath to admit it, Apple was the one leading the charge here, redefining the smart phone space and changing the way many people do the majority of their computing by creating the wildly successful niche of curated computing (read: tablets). It is then inevitable that many subsequent innovations from rival companies are seen as reactions to Apple’s advances, even if the steps those companies are taking are towards a much larger and broader goal than competing in the same market.
I am, of course, referring to Microsoft’s Windows 8 which was just demoed recently.
There’s been quite a bit of news about the upcoming release of Windows 8, with many leaked screenshots and even leaked builds giving us a lot of insight into what we can expect of the next version of Windows. For the most part the updates didn’t seem revolutionary, although things like portable desktops and a more integrated web experience were looking pretty slick. Still, Windows 7 was far from revolutionary either, but the evolution from Vista was more than enough to convince people that Microsoft was back on the right track and the adoption rates reflect that.
However the biggest shift coming with Windows 8 was known long before it was demoed: Windows 8 will run on ARM and other System on a Chip (SoC) devices. It’s a massive deviation from Microsoft’s current platform, which is wholly x86/x86-64 based, and it confirms Microsoft’s intention to bring the full Windows experience to tablets and other low power/portable devices. The recent demo of the new operating system confirmed this, with Windows 8 having both the traditional desktop interface we’re all familiar with and a more finger friendly version that takes its design cues from the Metro interface seen on all Windows Phone 7 devices.
Looking at all these changes you can’t help but think they were done in reaction to Apple’s dominance of the tablet space with the iPad. It’s true that a lot of the innovations in Windows 8 mirror what Apple has achieved in the past year or so, but since Windows 8 has been in development for much longer than that, not all of them can be credited to Microsoft playing the me-too game. Realistically it’s far more likely that many of these innovations are Microsoft’s first serious attempts at realizing their three screens vision, and many of the changes in Windows 8 support this idea.
A lot of critics think the idea of bringing a desktop OS to a tablet form factor is doomed to failure. The evidence to support that view is strong, too, since no Windows 7 (or any other desktop OS) tablet has enjoyed even a fraction of the success that the dedicated tablet OSes have. However I don’t believe that Microsoft is simply making a play for the tablet market with Windows 8; what they’re really doing is providing a framework for building user experiences that remain consistent across platforms. The idea of being able to complete any task whether you’re on your phone, TV or dedicated computing device (which can be a tablet) is what is driving Microsoft to develop Windows 8 the way they are. Windows Phone 7 was their first step into this arena, its UI has been widely praised for its usability and design, and Microsoft’s commitment to using it on Windows 8 shows that they are trying to blur the lines that currently exist between the three screens. The potential for .NET applications to run on x86, ARM and other SOC platforms seals the deal: there is little doubt that Microsoft is working towards a ubiquitous computing platform.
Microsoft’s execution of this plan is going to be vital for their continued success. Whilst they still dominate the desktop market, it’s being ever so slowly eroded away by the bevy of curated computing platforms that do everything users need them to do and nothing more. We’re still a long way from everyone outright replacing all their PCs with tablets and smart phones, but the writing is on the wall for a sea change in the way we all do our computing. Windows 8 is shaping up to be Microsoft’s way of re-establishing themselves as the tech giant to beat, and I’m sure the next year is going to be extremely interesting for fans and foes alike.
While I might enjoy a good old fashioned Apple bashing more than I should, I’m still pretty heavily invested in their platform, counting an iPhone and MacBook Pro amongst my computing arsenal. Still, anyone who’s been reading this blog long enough will know that I’m no fan of the hype that surrounds their products, nor the horde of apologists who try to rework any product fault or missing feature as a symbol of Apple’s “vision” when realistically Apple should cop some flak for it. Today I want to tackle one of the longest standing Apple myths, one that has still managed to perpetuate itself even in light of the overwhelming evidence to the contrary.
I am talking about, as the title implies, the Mac’s apparent immunity to malicious code.
Wind back the clock a few decades and we find ourselves at the dawn of the consumer PC age and the initial success of the Apple II series of microcomputers. Back then the notion of a computer virus was almost purely academic, with working viruses never leaving the confines of the places they were created in. Rich Skrenta, a then 15-year-old computer whiz, took it upon himself to code up what would become the very first virus to make it into the wild; he called it Elk Cloner. This particular virus attached itself to the Apple DOS running on the Apple II and on every 50th boot would display a lovely little poem to the user. Whilst it didn’t cause any actual harm (apart from annoyance) it was able to spread to other floppy disks, writing itself to the boot sector so that it would be loaded each time the machine started.
That’s right, the first ever in the wild virus was indeed Mac only.
Still, there’s a little kernel of truth in the saying that Macs are resistant to malicious code. Whilst most viruses in the past were written to inflict chaos and harm upon their users, the last decade saw virus writers make the switch to the more profitable ventures of stealing credit card information, mining data or turning your PC into a zombie to be used for nefarious purposes. The Mac’s immunity then came from obscurity: there’s little reason to go to all that effort only to target a small percentage of the worldwide PC user base, and so the most favored platform became the most targeted, leaving Macs relatively untouched.
Still, even a small percentage of billions adds up to multiple millions of people, and so some virus writers started to turn their sights towards the Mac platform. Reports started surfacing that confirmed the rumors that had been circulating, and it became official: Macs were now a target. Apologists shot off volleys left and right saying that these were just a minority, and were doing so right up to the end of last year, stating that the Mac’s immunity remained intact. Today brings news, however, that not only have Macs made the mainstream for normal users, they’re now mainstream for virus creators:
The kit is being compared to the Zeus kit, which has been one of the more popular and pervasive crimeware kits for several years now. A report by CSIS, a Danish security firm, said that the OS X kit uses a template that’s quite similar to the Zeus construction and has the ability to steal forms from Firefox.
“The Danish IT-security company CSIS Security Group has just yesterday observed a new advanced Form grabber designed for the Mac OS X operating system being advertised on several closed underground forums. In the same way as several other DIY crimeware kits designed for PCs, this tool consists of a builder, an admin panel and supports encryption,” Peter Kruse of CSIS said in a blog post.
Indeed Macs are now also the targets of scareware campaigns that masquerade as genuine virus scanners, and with the prevalence of web based malware on the increase the Mac platform only provides immunity against the garden variety botnet software, not the fun stuff like man-in-the-middle attacks or cross site scripting vulnerabilities. Truly, if you believe yourself immune to all the threats that the Internet poses simply because you chose the “better” platform, you’re making yourself far more vulnerable to the inevitable, especially to things like social engineering.
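The reason web based attacks don’t care what platform you’re on is easy to demonstrate: something like a cross site scripting flaw lives entirely in the web application and the browser, so the visitor’s operating system never enters into it. Here’s a minimal sketch in Python of the classic mistake (the page template and inputs are invented for illustration, not taken from any real site):

```python
import html

def render_greeting(name):
    # A naive page template that interpolates user input straight into
    # the HTML -- the classic recipe for reflected XSS. Note that nothing
    # here depends on whether the visitor runs Windows, OS X or Linux.
    return "<p>Hello, " + name + "!</p>"

def render_greeting_safe(name):
    # Escaping the input before interpolation renders any injected
    # markup inert, turning the payload into harmless visible text.
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>alert('pwned')</script>"
print(render_greeting(payload))       # the script tag survives intact
print(render_greeting_safe(payload))  # the payload is neutralised
```

The vulnerable version hands the attacker’s script straight to every browser that loads the page, Mac or otherwise, which is exactly why picking the “better” platform buys you nothing here.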
I’m not sure why people continue to perpetuate the myth that Macs are completely immune to the threats of the Internet. It seems to stem from the deep rooted belief that Macs are the better platform (whether they are or not is left up to the reader), and quelling the rumors that Macs can be compromised would seem to strengthen it, somehow. Instead Mac users would be far better served by acknowledging the threats and then building countermeasures against them, just as the Windows platform has done before them. It’s not a bad thing; any platform that holds some kind of value will eventually become the target of nefarious forces, and the sooner Mac apologists wake up and admit that they’re not the shining beacons of security they think they are, the better the worldwide computing ecosystem will be for it.