Microsoft has been pursuing its unified platform strategy for some time now with admittedly mixed results. The infrastructure to build that kind of unified experience is there, and Microsoft's own applications have demonstrated that it can be taken advantage of, but it hasn't spread to third-party developers and integrators like they intended. A big part of this is the fact that their mobile offering, Windows Phone, is a minor player that has been largely ignored by the developer community. Whilst its enterprise integration can't be beaten, the consumer experience, which is key to driving further adoption of the platform, has been severely lacking. Today Microsoft announced a radical new approach to improving this by allowing iOS and Android apps to run as Universal Applications on the Windows platform.
The approach differs slightly between platforms, however the final outcome is the same: applications written for the two current kings of the smartphone world can run as universal applications on supported Windows platforms. Android applications can be submitted in their native APK form and will then run in a para-virtualized environment, one which includes aspects of both emulation and direct subsystem integration. iOS applications, on the other hand, can as of today be compiled directly from Objective-C into Universal Applications that run on Windows Phones. Of course there will likely still be some effort required to get the UX in line, but not having to maintain separate core codebases means the barriers to developing a cross-platform app that includes Windows Phone will essentially drop to nothing.
Of course whether or not this will translate into more people jumping onto the Windows Phone ecosystem isn't something I can readily predict. Windows Phone has been languishing in single-digit market share ever since its inception and none of the changes Microsoft has made to lift that number have had a meaningful impact. A better app ecosystem will be a drawcard for those who like Microsoft but haven't wanted to make the transition, but this all relies on developers taking the time to release their applications on the Windows Phone platform. Making the dev experience easier is the first step, but then it's a chicken-and-egg problem: there isn't enough market share to make the platform attractive to either end of the spectrum.
Alongside this Microsoft also announced the ability for web pages to use features of the Windows Phone platform, enabling them to become hosted web apps with enhanced functionality. It's an interesting approach to enabling a richer web experience, however it feels like something that should be a generalized standard rather than proprietary tech that only works on one platform. Microsoft has shown that they're willing to open up products like this now, something they never did in the past, so this could just be a beachhead to gauge interest before they push it to a wider audience.
This is definitely a step in the right direction for Microsoft, as anything they can do to reduce the barrier to supporting their ecosystem will go a long way towards attracting more developers. There's still a way to go before their mobile platform is a serious contender against the current big two, but should this app portability program pay dividends there's real potential for them to start clawing back some of the market share they once had. It will likely be some time before we know if this gamble pays off, but I think everyone can agree that they're at least thinking along the right lines.
Just like any new tech gadget I've been ogling tablets for quite some time. Now I'm sure there will be a few who are quick to point out that I said long ago that an ultrabook fills the same niche, at least for me, but that didn't stop me from lusting after them for one reason or another. I'd held off on buying one for a long time though, as the price for something I'd only have a couple of uses for was far too high, even if I was going to use it for game reviews, so for a long time I simply wondered at what could be. Well, whilst I was at TechEd North America the opportunity to snag a Surface RT came up for the low price of $99 and I, silencing the fiscal conservative in me and giving in to my tech lust, happily handed over my credit card so I could take one home.
It's quite a solid device with a noticeable amount of heft despite its rather slim figure. Of particular note is the built-in kickstand which allows you to sit the Surface upright, something I've heard others wish for with their tablets. It's clear that the Surface has been designed to be used primarily in landscape mode, in opposition to most other tablets which favour the portrait view. For someone like me who's been a laptop user for a long time this didn't bother me too much, but I can see how it'd be somewhat irritating if you were coming from another platform, as it'd be just another thing to get used to. Other than that it's your pretty standard tablet affair with a few tweaks to give it a more PC feel.
The specifications are pretty decent, boasting a WXGA (1366 x 768) 16:9 screen powered by an NVIDIA Tegra 3 with 2GB of RAM behind it. I've got the 64GB model, which reports 53GB available and 42GB free, something of a contentious point for many buyers as they felt they weren't getting what they paid for (although at $99 I wasn't going to complain). The hardware is enough that I never noticed any stutter or slowdown, even when playing some of the more graphically intense games on it. I didn't really try any heavy productivity work because I much prefer my desktop for that, but I get the feeling it could handle 90% of the tasks I could throw at it. The battery life also appears relatively decent, although a couple of times it mysteriously came up at 0% charge; that might have been due to my fiddling with the power settings (more on why I did that later).
Since I've been a Windows 8 user for a while, the RT interface on the Surface wasn't much of a shock, although I was a little miffed that I couldn't run some of my chosen applications, notably Google Chrome, even in desktop mode. That being said, applications designed for the Metro interface are usually pretty good, the OneNote app and Cocktail Flow being good examples, however the variety of applications available is unfortunately low. This is made up for a little by the fact that the browser on the Surface is far more usable than the one on Windows Phone 7, enabling many web apps to work fine. I hope for Microsoft's sake this changes soon, as the dearth of applications on the Surface really limits its appeal.
The keyboard that came with the Surface gets a special mention because of just how horrid it is. Whilst the Touch Cover does a good job of being a protective cover, one with a rather satisfying click as the magnets snap in, it's absolutely horrendous as an input device, akin to typing on a furry piece of cardboard. Since there's no feedback it's quite hard to type fast on it, and worse still it seems to miss key presses every so often. Probably the worst part is that if your Surface locks itself with the cover attached and you then remove it, you have no way to unlock the device until you re-attach it, even if you've set up a PIN code. I've heard that the Type Cover, the one with physical keys, is a lot better, but since it was going for $100 at the time I wasn't too keen on purchasing it.
The Surface does a good job of filling the particular niche I had for it, mainly watching videos and remoting into my servers, but past that I haven't found myself using it much. Indeed the problem seems to be that the Surface, at least the non-Pro version, is stuck halfway between being a true tablet and a laptop, as many of its features are still computer-centric. This means potential customers on either side of the equation will probably feel like they're missing something, which I think is one of the main reasons the Surface has struggled to gain much market share. The Pro seems much closer to being a laptop, enough so that the people I talked to at TechEd seemed pretty happy with their purchase. Whether that translates into Microsoft refocusing their strategy with the Surface remains to be seen, however.
The Surface is a decent little device, having the capabilities you've come to expect from a tablet whilst still having that Microsoft Windows feel about it. It's let down by the lack of applications and the dissonance it suffers from being stuck between the PC and tablet worlds, something that can't be easily remedied by a software fix. The Touch Cover is also quite atrocious as a keyboard and should be avoided at all costs, even if you're just going to use it as a cover. For the price I got it for I think it was worth the money, however getting it at retail is another story, and unless you're already running a completely Microsoft house I'd struggle to recommend it over an ultrabook or similarly portable computing device.
Windows 7, whilst being around for quite a while in some form, has only been officially available for just on 2 years. Its successor, the ingeniously named Windows 8, is scheduled to hit the market sometime late next year, around 3 years after its predecessor's release. Should that stay on schedule Microsoft will be on track to keep its promise of releasing a new version of Windows every 3 years or so, hopefully avoiding the long development cycle that plagued Vista and signalling to corporate IT that yes, XP really is about to die. As part of their recent BUILD conference Microsoft released a developer preview of Windows 8, aimed at those looking to have a play with the upcoming OS and at getting developers started on building apps for the platform. I've had my hands on a copy for the past week or so and given it the once-over, with some rather interesting results.
Windows 8 installs just like its predecessor does, although this one required me to break out one of my dual-layer DVDs in order to fit the image onto a single disc. The differences begin when it comes to configuring Windows 8 once the install has completed. Most noticeably the UI at these stages has been completely redone in the Metro style, signalling that Microsoft believes this will be the main way people use their computers in the future. In a similar vein to what Apple has long done, Microsoft now gives you the option of signing into your PC with a Windows Live account, allowing you to sync certain settings with the cloud. For tablets and desktop PCs alike this will be a good feature for your average home user, especially if Microsoft includes some automated backup of, say, the My Documents folder to a user's SkyDrive account.
The first screen (pictured above) is what users are presented with after their first login. Although there may be some familiar names on there (like Internet Explorer and Control Panel) these items are in fact Metro applications based on the new WinRT framework. The icons with the darker green background are shortcuts to traditional desktop applications, and the desktop itself can be accessed via the aptly named Desktop shortcut. It's quite obvious that this interface is designed with touch in mind, as the icons are massive compared to their desktop counterparts and navigation comes by way of swiping mouse motions or the mouse wheel. I can see this interface replacing the regular Windows desktop for a lot of users, especially if the app scene becomes comparable to Apple's.
Diving into the desktop interface reveals a few new features. Gone are the rounded corners we've become used to since Vista; back are the sharp angular edges somewhat reminiscent of Windows XP. The Aero translucency, which I've always loved, is still around, so it will continue to offend those die-hard "Windows Classic" fans. The major change you'll notice is the addition of the ribbon at the top of the Explorer window. The ribbon always seems to be a point of contention and, I'll be honest, I hated it too when I first saw it. In Office though it made quite a lot of sense and I've grown to like it. For Explorer I'm not so sure, since the items on it are all familiar context menu items or keyboard shortcuts. Thankfully you can hide the entire thing by clicking the little caret in the right-hand corner, so it's a non-issue.
Gone too is the start menu, outright replaced by the new Metro interface you saw earlier. Clicking the start button or hitting the Windows key will spin you right out of desktop mode and into Metro, although the behaviour seems to depend on the hardware you installed it on. In a virtual machine that seems to be the default, but on my physical test box I was able to bring up a context menu with a couple of options (log off, switch user, etc.). This is somewhat disconcerting for an admin user like myself who's become quite accustomed to finding most things by hitting the Windows key and then typing what I want (Windows Desktop Search). It's still available through Windows + F, however, but only in Metro form:
As an OS, however, it's pretty much just Windows 7 underneath all the Metro changes, as I haven't found anything significant under the hood that isn't already in Windows 7. This is both good and bad: it means a somewhat easy transition for administrators to move users over, but there doesn't seem to be a whole lot of innovation apart from Metro and WinRT. Of course this is still very much an alpha-grade product (the UI is constantly breaking in my virtual machine, slightly less so on physical hardware) so there could be a lot of stuff that's simply not turned on or not yet implemented. I'm sure the next year will bring a lot of changes to the OS in both visual and non-visual aspects, so I'll reserve judgement until it's more feature complete.
For what it's intended for though (i.e. getting developers working on Metro apps)? This build seems perfect. I've yet to tinker with building an application beyond starting up Visual Studio to see if it works, but the build is functional enough to test everything a budding app developer would need to. It's far from usable as an everyday machine though, even for an early adopter; I'd say we're about 6 months away from it being ready in that form, much as its predecessor was at the same stage. There's still a lot I haven't had the chance to fiddle with, so I'll probably be revisiting Windows 8 a couple of times, as well as the new Visual Studio.
Whilst Android has been making solid inroads into the tablet market, snapping up a respectable 26.8% share, it's still really Apple's market, with Apple holding a commanding lead that no one has come close to touching. It's not for lack of trying, with many big-name companies attempting to break into the market only to pull out shortly afterwards, sometimes in a blaze of fire-sale glory. It doesn't help matters that every new tablet is compared to the iPad, ensuring every new tablet attempts to one-up it in some way, usually keeping price parity with the iPad but without the massive catalogue of apps that people have come to expect from Apple products.
Apple’s got a great game going here. All of their iDevice range essentially made the market that they’re in, grabbing enough fans and early adopters to ensure their market dominance for years to come. Competitors then attempt to mimic Apple’s success by copying the essential ideas and then attempting to innovate, fighting an uphill battle. Whilst they might eventually lose ground to the massive onslaught of competitors (like they have to Android) they’ll still be one of the top individual companies, if they’re not number 1. It’s this kind of market leading that makes Apple products so desirable to John Q. Public and the reason why so many companies are failing to steal their market share away.
Rumours have been circulating for a while now over Amazon releasing a low cost tablet of some description and of course everyone was wondering whether it would shape up to be the next “iPad killer”. Today we saw the announcement of the Kindle Fire: a 7-inch multi-touch tablet that’s heavily integrated with Amazon’s services and comes at the low low price of only $199.
As a tablet it's something of an outsider, foregoing the traditional 9 to 10 inch screen size for a smaller 7 inch display. The processor isn't anything fantastic, being just a step up from the one that powers the Nook Color, but history has shown it's quite a capable chip, so the Kindle Fire shouldn't be a slouch when it comes to performance. There's also a distinct lack of cameras, 3G and Bluetooth connectivity, meaning the sole connection this tablet has to the outside world is your local wifi connection. It comes with 8GB of internal storage that's not upgradeable, favouring storing everything in the cloud and downloading it as required. You can see why this thing wouldn't work with WhisperNet.
Also absent is any indication that the Kindle Fire is actually an Android device, with the operating system having been given a total overhaul. Google's app store has been outright replaced by Amazon's Android app store, and the familiar Android interface has been replaced by a custom UI designed by Amazon. All of Amazon's services, music, books and movies to name a few, are heavily integrated with the device. Indeed they are so heavily integrated that the tablet also comes with a free month of Amazon Prime, Amazon's premium service that offers unlimited free 2-day shipping plus access to their streaming media catalogue. At this point calling this thing a tablet seems like a misnomer; it's much more of a media consumption device.
What's really intriguing about the Kindle Fire though is the browser Amazon has developed for it, called Silk. Like Opera Mini and Skyfire before it, Silk offloads some of the heavy lifting to external servers, namely Amazon's massive AWS infrastructure. There are some smarts in the delineation between what should be processed on the device and what should be done on the servers, so hopefully dynamic pages, which suffered heavily under this kind of configuration in the past, will run a lot better under Silk. Overall it sounds like a massive step up in browser usability for devices like these, which is sure to be a great selling point for the Kindle Fire.
The more I read about the Kindle Fire the more I get the feeling that Amazon has seen the game Apple has been playing and decided not to get caught up in it like their competitors have. Instead of competing directly with the iPad et al. they've created a device that's heavily integrated with their own services and kept themselves at arm's length from Android. John Q. Public then won't see the Kindle Fire as an Android tablet nor an iPad competitor; rather, it's a cheap media consumption device from a large and reputable company that's capable of other tasks. The price alone is enough to draw people in, and whilst the margins on the device are probably razor thin they'll more than likely make it up in media sales. All of that together makes the Kindle Fire a force to be reckoned with, but I don't think current tablet manufacturers have much to worry about.
The Kindle Fire, much like the iPad before it, carves out its own little niche, one that others have so far failed to fill. It's not a feature-laden object of every geek's affection; rather, it's a tablet designed for the masses with a price that competitors will find hard to beat. The deep integration with Amazon's services will be the feature that ensures the Kindle Fire's success, as that's what every other iPad competitor has lacked. However there'll still be a market for the larger, more capable tablets, as they're more appropriate for people seeking a replacement for their laptop rather than a beefed-up media player. I probably won't be buying one for myself, but I could easily see my parents using one of these.
And I’m sure that’s what Amazon is banking on too.
I love me a good widget. Between my daily intake of news from sites like Engadget and TechCrunch I know there's always something in the pipeline that's assured to take some of my time and/or money away from me in the future. It's probably why I stuck around for so long at my first ever job in a retail electronics store, as I always got to have a good fiddle with all the wondrous tech I couldn't yet afford before putting it on display for my clientele. Back then though the Internet was still reeling from the dot-com crash and most of the tech I sold didn't really make use of any online services. Today however it's hard to find a gadget that doesn't want to phone home for one reason or another, usually to make use of data stored elsewhere.
Realistically this is a good thing. The whole Web 2.0 revolution has culminated in an online world where sharing any information you have with the wider world is considered the norm, and you'd be damned for trying otherwise. This is how the idea for Geon originally came about, as a quick search around the web turned up no less than 6 services all ready, willing and able to give up their data to me at no cost and in the format I desired. It hasn't stopped there either: nearly every other week I'm finding yet another service (the latest being Groupon) that will happily provide me with some feed coupled with the geographical co-ordinates I so hungrily desire. I'm not the only one taking advantage of these feeds either, and a whole host of mash-up applications are available, many of them reaping the benefits of the open web's norm of sharing.
Still it's kind of interesting to note how much trust we put in these open services. Take for instance good old Twitter. Many of its heaviest users don't use it directly through the web interface, mostly because, whilst functional, it's far from the best interface designed for the service. I myself prefer Echofon, which remembers which tweets I've read and gives me a slick interface for uploading pictures and all manner of Twitter-related tasks. The only issue is that I have to provide my raw login details to the application in order to make use of these features. Whilst this isn't a problem for most people (my Twitter account hasn't been hijacked… yet) it does mean that in order to make use of the client and the service you have to place a certain amount of trust in them, and this is where things start to get tricky.
There have been many attempts to solve the problem of working out who to trust on the Internet. The most common method currently in use takes the form of digital signatures and certificates. In essence this boils down to having some central authority (or authorities, as is currently the case) that verifies someone is who they say they are. Once verified, they're issued a digital certificate proclaiming that central authority X has vouched for them, which they can then present to show they are who they claim to be. Again a certain amount of trust must be placed in the central authority, but the model has worked (for the most part), with many large companies acting as trusted central authorities for such activities. Every time you visit a site that gives you that little lock in your browser bar, or colours it blue or green, it's that central authority verification in action. The system still has its problems, since some authorities seem a bit lax when it comes to verifying people and the system itself has been shown vulnerable to certain attacks, but you'll get that with any popular system.
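That chain of trust can be sketched in a few lines of Python. This is a toy model only: HMAC stands in for the asymmetric RSA/ECDSA signatures a real authority would use, and the authority and site names are made up for illustration.

```python
import hashlib
import hmac

# Toy certificate scheme: the issuing authority "signs" the subject's name
# with its secret key. Real PKI uses asymmetric signatures, so verifiers
# only ever need the authority's *public* key; HMAC is just a stand-in.
def sign(issuer_key: bytes, subject: str) -> str:
    return hmac.new(issuer_key, subject.encode(), hashlib.sha256).hexdigest()

# A trusted root, analogous to the CA certificates shipped with a browser.
root_key = b"central-authority-X-secret"
trusted_roots = {"Authority X": root_key}

# The authority issues a certificate vouching for a site's identity.
cert = {
    "subject": "example.com",
    "issuer": "Authority X",
    "signature": sign(root_key, "example.com"),
}

def verify(cert: dict) -> bool:
    """Check a certificate against the trusted roots, as a browser would."""
    issuer_key = trusted_roots.get(cert["issuer"])
    if issuer_key is None:
        return False  # unknown authority: no basis for trust
    expected = sign(issuer_key, cert["subject"])
    return hmac.compare_digest(expected, cert["signature"])

print(verify(cert))  # a valid certificate from a known authority

# Impersonation: changing the subject breaks the signature check.
forged = dict(cert, subject="evil.example.com")
print(verify(forged))
```

The key property is the one described above: you only have to trust the root authority, and everything it vouches for can be checked mechanically.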
One of the most novel ideas I've seen so far is OAuth. The idea is that you grant an application a token which allows it to access your data on a service. Depending on the token, it could be limited to a certain subset of data (say your public timeline on Twitter), valid only for a specific time frame, or even valid only for a specific device. There's still an amount of trust involved, however it gives an enormous amount of power to the user to do damage control should an app or service go rogue. Granted such incidents are rare, but with a system like OAuth you have options beyond hoping the service provider will fix the problem or trying to fix it yourself: you can simply revoke the token.
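The properties that make this attractive, scoped, expiring, revocable credentials, can be sketched with a toy token store. This is an illustration of the idea rather than a real OAuth implementation (the scope names are invented, and the actual protocol involves a full authorization flow between user, client, and provider):

```python
import secrets
import time

class TokenService:
    """Toy model of delegated access: tokens instead of raw passwords."""

    def __init__(self):
        self._tokens = {}

    def grant(self, scope: str, ttl_seconds: int) -> str:
        """User authorizes an app: issue a scoped, time-limited token."""
        token = secrets.token_urlsafe(16)
        self._tokens[token] = {
            "scope": scope,
            "expires_at": time.time() + ttl_seconds,
        }
        return token

    def allows(self, token: str, scope: str) -> bool:
        """The service checks the token, never the user's credentials."""
        info = self._tokens.get(token)
        if info is None:
            return False                  # never issued, or revoked
        if time.time() >= info["expires_at"]:
            return False                  # expired
        return info["scope"] == scope     # scope must match

    def revoke(self, token: str) -> None:
        """Damage control: the user cuts off a rogue app immediately."""
        self._tokens.pop(token, None)

service = TokenService()
token = service.grant(scope="read:timeline", ttl_seconds=3600)

print(service.allows(token, "read:timeline"))  # within the granted scope
print(service.allows(token, "post:tweet"))     # outside the granted scope

service.revoke(token)                          # the app goes rogue
print(service.allows(token, "read:timeline"))
```

Contrast this with handing Echofon your password: a password grants everything, forever, until you change it everywhere, whereas a token grants one slice of access that you can kill on its own.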
For the most part though the open web has prevented any wide-scale skullduggery from apps and services that everyone once trusted. I'd put that down to a good chunk of the big players either being Google or having a heavy involvement with Google, whose "Don't be evil" policy seems to keep most of them honest. Additionally, your service or app isn't long for the Internet world should your users find you're screwing them one way or another, although there are some notable exceptions.
None of this bellyaching has stopped me from using a myriad of online services and it never will. As long as I don't delude myself about what can happen on the Internet, I have no problem with big companies calculating all sorts of metrics on me in exchange for a service I find useful. I still cast a wary eye towards any new player in the Internet field, and so should you, but that shouldn't stop you from using anything online altogether. I guess the point I'm trying to make is that you should be aware of what you're getting into when you type that magical user name and password into your app of choice, and not be surprised when you find out that free service had a hidden cost.