The date for the final version of Windows 10 has been set: July 29 of this year.
The announcement comes as a shock to no one; Microsoft had repeatedly committed to making Windows 10 generally available sometime this year. However, the timing is far more aggressive than I would have expected. The Windows Insider program was going along well, although the indications were that most of the builds still had a decidedly beta feel to them, with many features missing. Indeed the latest build was released just three days ago, suggesting that a full release was still some time away. Microsoft isn’t one to give soft dates, especially for their flagship OS, so we can take the July 29 date as gospel from here on out.
Since everyone in the Insider program has had their hands on Windows 10 for some time now, the list of features likely won’t surprise you; however, there were a few things that caught my eye in Microsoft’s announcement post. By the looks of it Office 2016 will be released alongside the new version of Windows, including a new universal app version that’s geared towards touch devices. Considering how clumsy the desktop Office products felt on touch screens this is a welcome addition for tablet and transformer devices, although I’d hazard a guess that the desktop version will still be the preferred one for many. What’s really interesting though is that OneNote and Outlook, long considered staples of the Office suite by many, will now be included in the base version of Windows for free. It’s not as big an upset as including, say, Word or Excel would be, but it’s still an unexpected move nonetheless.
Many of the decidedly lacklustre default metro apps will get some new life breathed into them with an update to the universal app platform. On the surface this removes their irritating “takes over your entire desktop when launched” behaviour and makes them behave a lot more like a traditional app. Whether or not they’ll be improved to the point of being usable beyond that is something I’ll have to wait and see, although I do have to admit that some of the built in apps (like the PDF reader) were quite useful to have. How well the integration between those apps, the cloud and other devices that can run universal apps works remains to be seen, although I’ve heard positive things about this experience in the past.
It seems that Microsoft has had this date in mind for some time now, as all my home Windows 8.1 installs chirped up with a “Reserve your free Windows 10!” pop up late last night. This is the realisation of the promise Microsoft made back at the start of the year to provide a free Windows 10 upgrade to all current consumer level customers, something I thought would likely be handled through a redemption portal or similar. However, based on the success Microsoft had in getting people to upgrade from 8 to 8.1 with a similar notification, I can see why they’ve taken this approach: it’s far more likely to get people upgrading than a free Windows 10 serial would.
What will be truly interesting to see is if the pattern of adoption continues with major Windows versions. Windows 7, which is now approaching middle age, still remains unchallenged by the previous two upstarts. The barriers to transitioning are now much lower than they once were, however customers have shown that familiarity is something they value above nearly everything else. Windows 10 has all the makings of a Windows version that consumers want but we all know that what people say they want and what they actually want are two different things.
Ah PowerPoint, the thing that everyone seems to loathe when they walk into a meeting, yet when it comes time for them to present something it’s the first tool they look to for getting their idea across. Indeed in my professional career I’ve spent many hours standing in front of a projection screen, the wall behind me illuminated by slide after slide of information I was hoping to convey to my audience, jabbering on about what the words behind me meant. It seems that every year there’s someone calling for the death of the de facto presentation tool, lamenting its use in many well publicised scandals and failures. However, much like the poor workman who blames his tools, PowerPoint is not responsible for many of the ills aimed at it. That responsibility, unfortunately, lies with the people who use it.
PowerPoint, like every Microsoft Office product, when put in the hands of the masses ends up being used in ways that it never should have been. This does not necessarily mean the tool is bad, indeed I’d like to see a valid argument for the death of say Word given the grave misuses it has been put to, more that it was likely not the most appropriate medium for the message it was trying to convey or the audience it was presented to. When used in its most appropriate setting, which I contend is as a sort of public prompt card for both the speaker and the audience, PowerPoint works exceptionally well for conveying ideas and concepts. What it’s not great at doing is presenting complex data in a readily digestible format.
But then again there are very few tools that can.
You see many of the grave failings that have been attributed to PowerPoint are the result of its users attempting to cram an inordinate amount of information into a single panel, hoping that it somehow all makes its way across to the audience. PowerPoint, on its own, simply does not have the capability to distill information down in that manner and as such relies on the user’s ability to do that. If the user lacks the ability to do that both coherently and accurately then the result will, obviously, not be usable. There’s no real easy solution to this, as creating infographics that convey real information in a digestible format is a world unto itself, but blaming the tool for the ills of its users, and thus calling for the banning of its use, seems awfully shortsighted.
Indeed if it was not for PowerPoint then it would be another member of the Microsoft Office suite that would be met with the same derision, as they all have the capability to display information in some capacity, just not in the format that most presentations follow. Every time people have lamented PowerPoint to me I’ve asked them to suggest an alternative tool that solves the issues they speak of, and every time I have not received a satisfactory answer. The fact of the matter is that, as a presentation tool, PowerPoint is one of the top in its class and that’s why so many turn to it. The fact that it’s found at the center of a lot of well publicised problems isn’t because the tool itself is problematic, just that it’s the most popular tool to use.
What really needs to improve is the way in which we take intricate and complex data and distill it down to its essence for imparting it on others. This is an incredibly wide and diverse problem space, one that entire companies have founded their business models on. It is not something that we can pin on a simple presentation tool; it requires a fundamental shift away from thinking that complex ideas can be summed up in a handful of words and a couple of pretty pictures. Should we want to impart knowledge upon someone else then it is up to us to take them on that journey, crafting an experience that leaves them with enough information to be able to impart that idea on someone else. If you’re not capable of doing that then neither PowerPoint nor any other piece of software will help you.
It seems that the semiconductor industry can’t go a year without someone raising the tired old flag that is the impending doom of Moore’s Law. Nearly every year there’s a group of people out to see it finally meet its end, although to what purpose I could not tell you. However, as any industry observer will tell you, these predictions have, for the past 5 decades, proved to be incorrect, as any insurmountable barrier is usually overcome when the requisite billions are thrown at the problem. However we are coming to a point where our reigning champion behind Moore’s Law, namely planar transistors built on silicon, is starting to reach the end of its life and thus we have been searching for its ultimate replacement. Whilst it seems inevitable that a new material will become the basis upon which we build our new computing empire, the question of how that material will be shaped is still unanswered, but there are rumblings of what may come.
For the vast majority of computing devices out there the transistors under the hood are created in a planar fashion, i.e. they essentially exist in a 2 dimensional space. In terms of manufacturing this has many advantages, and the advances we’ve made in planar technology over the years have seen us break through many barriers that threatened to stop Moore’s Law in its tracks. Adding in that additional dimension however is no trivial task and, whilst it’s not beyond our capability to do (indeed my computer is powered by a component that makes use of a 3D manufacturing process), applying it to something as complicated as a CPU requires an incredible amount of effort. However the benefits of doing so are proving to be many, and the transistor pictured above, called a Quantum Well Field Effect Transistor (QWFET), could be the ram with which we break through the next barrier to continuing Moore’s Law.
The main driver behind progress in the CPU market comes from making transistors ever smaller, something which allows us to pack more of them into the same space whilst also giving us benefits like reduced power consumption. However as we get smaller, issues that could once be ignored, like gate leakage back when we were still at the 45nm stage, start to become fundamental blockers to progress. Right now, as we approach sizes below 10nm, that same problem is starting to rear its head again and we need to look at innovative solutions to tackle it. The QWFET is one such solution, as it has the potential to eliminate the leakage problem whilst allowing us to continue our die shrinking ways.
QWFETs are essentially an extension of Intel’s current FinFET technology. In current FinFETs electrons are bounded on 3 sides, which is what helped Intel make their current die shrink workable (although it has taken them much longer than expected to get the yields right). In QWFETs the electrons are bounded on an additional side, which forms a quantum well inside the transistor. This drastically reduces the leakage which would otherwise plague a transistor of sub-10nm size and, as a benefit, significantly reduces power draw as the static power usage drops considerably.
This does sound good in principle and would be easy to write off as hot air had Intel not been working on it since at least 2010. Some of their latest research points to these kinds of transistors being the way forward all the way down to 5nm which would keep Moore’s Law trucking along for quite some time considering we’re just on the cusp of 14nm products hitting our shelves. Of course this is all speculative at this time however there’s a lot of writing on the wall that’s pointing to this as being the way forward. If this turns out to not be the case then I’d be very interested to see what Intel had up their sleeves as it’d have to be something even more revolutionary than this.
Either way it’ll be great for us supporters of Moore’s Law and, of course, users of computers in general.
After the hubbub that Solar Freakin Roadways caused last year (ranging in tone from hopeful to critical) all seemed to have gone quiet on the potentially revolutionary road surface front. I don’t think anyone expected us to be laying these things down en masse once the Indiegogo campaign finished, but I’ve been surprised that I hadn’t heard more about them in the year that’s gone by. Whilst Solar Roadways might not have been announcing their progress from the rooftops there has been some definitive movement in this space, coming to us from a Dutch company called SolaRoad. Their test track, which was installed some 6 months ago, has proven to be wildly successful, which gives a lot of credibility to an idea that some saw as just an elaborate marketing campaign.
The road was constructed alongside a bike path totalling about 70m in length. Over the last 6 months the road has generated some 3,000kWh, a considerable amount of energy given the less than ideal conditions that these panels have found themselves in. Translating this figure into an annual number gives them around 70kWh per square meter per year which might not sound like much, indeed it’s in line with my “worst case” scenario when I first blogged about this last year (putting the payback time at ~15 years or so), but that’s energy that a regular road doesn’t create to offset its own cost of installation.
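For the curious, the figures above can be sanity-checked with some back-of-envelope arithmetic. This is just a sketch: the effective panel area, electricity price and installed cost per square meter are my own assumptions, not numbers SolaRoad has published.

```python
# Back-of-envelope check of the SolaRoad trial figures. The panel area,
# electricity price and install cost are ASSUMPTIONS for illustration only.
energy_6_months_kwh = 3_000           # reported generation over the first 6 months
annual_kwh = energy_6_months_kwh * 2  # naive annualisation of that figure

panel_area_m2 = 85                    # ASSUMED effective solar area of the 70m path
per_m2_per_year = annual_kwh / panel_area_m2  # works out to ~70 kWh/m^2/yr

electricity_price_per_kwh = 0.25      # ASSUMED retail rate, EUR/kWh
install_cost_per_m2 = 250             # ASSUMED installed cost, EUR/m^2

annual_return_per_m2 = per_m2_per_year * electricity_price_per_kwh
payback_years = install_cost_per_m2 / annual_return_per_m2
print(f"{per_m2_per_year:.0f} kWh/m^2/yr, payback ~{payback_years:.0f} years")
```

Under those assumptions the payback lands in the ~14-15 year range, which is why the trial's numbers line up with my earlier "worst case" estimate.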
Like Solar Roadways’ design, SolaRoad’s is essentially a thick layer of protective glass above the solar panels, which are then backed by a layer of rubber and concrete. Instead of the hexagonal tile design they’ve gone for flat panels, which would appear to be more congruent with traditional road design, although I’ll be the first to admit I’m not an expert in this field. By all accounts their design has stood the test of time, at least under the light load of cycling (although they claim it could handle a fire truck). The next stage for them would be to do a full scale replica on a road that sees a decent amount of traffic as, whilst a cycleway is a good indication of how it will perform, there’s nothing better than throwing the challenges of daily traffic volumes at it.
Unfortunately SolaRoad isn’t yet ready to release a potential price per kilometer installed; however the entire program, including the research to design the coatings and the road itself, has come in at some €3.7 million. Considering that my original estimates pegged a competitive cost at around $1 million per kilometer I’d say that the trial has been a pretty good investment (unless you’d really want 4km worth of road somewhere instead…). Cost will ultimately be what determines if something like this can become a feasible alternative to our current asphalt road surfaces, as the idea won’t get any traction if it’s noticeably more expensive than its traditional counterpart.
It’s good to see progress like this as it shows that the idea has some merit and definitely warrants further investigation. Whilst the power generation numbers might not be revolutionary there’s something to be said for a road that pays itself off over time, especially when that comes in the form of renewable energy. With further advances in grid technology and energy storage these roadways, in conjunction with other renewables, could form the basis of a fossil fuel free future. There’s a long way to go between today and that idyllic future but projects like this ensure that we keep making progress towards it.
I’ve spent the better part of the last 4 years banging on about how the hybrid cloud should be the goal that all cloud services work towards. Whilst the argument can be made that this might be born out of some protectionist feeling for on-premise infrastructure it’s more that I could never see large organisations fully giving up control of their infrastructure to the cloud. However the benefits of using the cloud, both in terms of its IaaS and PaaS capabilities, are undeniable and thus the ideal scenario is a blend between these two. Only one cloud provider has seriously considered this position, likely because of their large footprint in the enterprise space. Today Microsoft has launched the next stage in its cloud strategy: the Microsoft Azure Stack.
The Azure Stack appears to be an extension of the Azure Pack that Microsoft released a couple of years ago, bringing many of the backend features that Microsoft itself uses to power the Azure Cloud to the enterprise. However, whilst the Azure Pack was more of an interface that brought a whole lot of tools together, the Azure Stack is its own set of technologies that elevates your current IT infrastructure with Azure features. As to what those features are exactly, Microsoft isn’t being more specific than saying IaaS and PaaS currently, although the latter indicates that some of the more juicy Azure features, like Table Storage, could potentially find their way into your datacenter.
The idealized hybrid cloud scenario that many have been talking about for years is an on-premise deployment that’s able to burst out to the cloud for additional resources when the need strikes. Whilst this was theoretically possible, if you invested the time to develop or customize your applications to take advantage of it, the examples of successful implementations were few and far between. The improvements that come with the Microsoft Azure Stack make such a scenario far more possible than it ever was before, allowing developers to create applications against a common platform that remains consistent no matter where the application finds itself running. At the same time supporting infrastructure applications can benefit from those same advantages, greatly reducing complexity in administering such an environment.
This comes hand in hand with the announcement of Microsoft Operations Manager, which is essentially the interface to your on-premise cloud. Microsoft is positioning it as the one interface to rule them all, as it’s capable of interfacing with all the major cloud providers as well as the various on-premise solutions that their competitors provide. The initial release will focus on 4 key features: Log Analytics, Security, Availability and Automation, with more features coming at a “rapid pace” as the product matures. For me the most interesting features are the availability (apparently enabling a cloud restore of an application regardless of where it sits) and the automation stuff, but I’ll need to have a play with it first before I call out my favourite.
The Microsoft Azure Stack is by far the most exciting announcement to come out of Redmond in a long time as it shows they’re dedicated to providing the same experience to their enterprise customers as they currently deliver to their cloud counterparts. The cloud wall that has existed ever since the inception of the first cloud service is quickly breaking down, enabling enterprise IT to do far more than it ever could. This new Microsoft, which is undoubtedly being powered by Nadella’s focus on building upon the strong base he created in the Server and Tools division, is one that its competitors should be wary of as they’re quickly eating everyone else’s lunch.
The problem that most renewables face is that they don’t generate power constantly, requiring some kind of energy storage medium to provide power when they’re not generating. Batteries are the first thing that comes to everyone’s mind when looking for such a device, however the ones used for most home power applications aren’t any more advanced than your typical car battery. Other methods of storing power, like pumped hydro or compressed air, are woefully inefficient, shedding much of the generated power as waste heat or in the process of converting it back to electricity when it’s needed. Many have tried to revolutionize this industry but few have made meaningful progress, that is until Tesla announced the Powerwall.
The Powerwall is an interesting device, essentially a 7kWh (or 10kWh, depending on your application) battery that mounts to your wall and can provide power to your house. Unlike traditional systems, which were required to be constructed outside due to the batteries producing hydrogen gas, the Powerwall can be mounted anywhere on your house. In a grid-connected scenario the Powerwall can store power during off-peak times and then release it during peak usage, thereby reducing the cost of your energy consumption. The ideal scenario for it however is to be connected to a solar array on the roof, storing that energy for use later. All of this comes at the incredibly low price point of $3,000 for the 7kWh model, with the larger variant a mere $500 more. Suffice to say this product has the potential for some really revolutionary applications, not least of which is reducing our reliance on fossil fuel generated power.
The solar incentives that many countries have brought in over the last few years have seen an explosion in the number of houses with domestic solar arrays. This, in turn, has brought down the cost of getting solar installed to ridiculously low levels, even less than $1/watt installed in some cases. However with the end of the feed-in tariffs these panels are usually not economical, with the feed-in rates usually below that of the retail rate. Using a Tesla Powerwall however would mean that this energy, which would otherwise be sold at a comparative loss, could be used when it’s needed. This would reduce the load on the grid whilst also improving the ROI of the panels and the Powerwall system, a win-win in anyone’s books.
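The economics here come down to the spread between the feed-in rate and the retail rate: every stored kWh you use yourself earns that difference, minus round-trip losses. A rough sketch of the sums, with the tariff rates and efficiency being my own assumed figures rather than anything Tesla or a utility has published:

```python
# Rough self-consumption arithmetic for a Powerwall-style battery. The tariff
# rates and efficiency are ASSUMPTIONS for illustration, not published figures.
battery_kwh = 7.0             # usable capacity of the daily-cycle model
feed_in_rate = 0.08           # $/kWh paid for exported solar (assumed)
retail_rate = 0.28            # $/kWh for power bought from the grid (assumed)
round_trip_efficiency = 0.92  # fraction of stored energy recovered (assumed)

# Each full cycle shifts battery_kwh from low-value export to high-value
# self-use, discounted by round-trip losses.
value_per_cycle = battery_kwh * (retail_rate * round_trip_efficiency - feed_in_rate)
annual_value = value_per_cycle * 365
payback_years = 3_000 / annual_value  # against the $3,000 price of the 7kWh model
print(f"~${annual_value:.0f}/yr saved, payback ~{payback_years:.1f} years")
```

Under those assumptions the unit pays for itself in well under a decade, which is what makes it interesting for households already stuck on a low feed-in tariff.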
It would be one thing if Tesla was just making another product however it seems that Elon Musk has a vision that extends far beyond just ripping the battery out of its cars and selling them as grid connected devices. The keynote speech he gave a few days ago is evidence of that and is worth the watch if you have the time:
In its current incarnation the Tesla Powerwall is a great device, one that will make energy storage feasible to a much wider consumer base. However I can’t help but feel that this is just Tesla’s beachhead into a much larger vision and that future revisions of the Powerwall product will likely bring even larger capacities for similar or lower prices. Indeed this is all coming to us before Tesla has completed their Gigafactory-1 which is predicted to reduce the cost of the batteries by some 30% with further iterations driving it down even more. Suffice to say I’m excited about this as it makes a fully renewable future not only inevitable, but tantalizingly close to reality.
Microsoft has been pursuing its unified platform strategy for some time now with admittedly mixed results. The infrastructure to build that kind of unified experience is there, and indeed Microsoft applications have demonstrated that it can be taken advantage of, but it really hasn’t spread to third party developers and integrators like they intended it to. A big part of this was the fact that their mobile offering, Windows Phone, is a very minor player that has been largely ignored by the developer community. Whilst its enterprise integration can’t be beaten the consumer experience, which is key to driving further adoption of the platform, has been severely lacking. Today Microsoft has announced a radical new approach to improving this by allowing iOS and Android apps to run as Universal Applications on the Windows platform.
The approach is slightly different between platforms however the final outcome is the same: applications written for the two current kings of the smartphone world can run as universal applications on supported Windows platforms. Android applications can be submitted in their native APK form and will then run in a para-virtualized environment (one that includes aspects of both emulation and direct subsystem integration). iOS applications on the other hand can, as of today, be compiled directly from Objective-C into Universal Applications that can be run on Windows Phones. Of course there will likely still be some effort required to get the UX in line, but not having to maintain different core codebases will mean that the barriers to developing a cross platform app that includes Windows Phone will essentially drop to nothing.
Of course whether or not this will translate into more people jumping onto the Windows Phone ecosystem isn’t something I can readily predict. Windows Phone has been languishing in the single digit market share ever since its inception and all the changes that Microsoft has made to get that number up haven’t made a meaningful impact on it. Having a better app ecosystem will be a drawcard to those who like Microsoft but haven’t wanted to make the transition but this all relies on developers taking the time to release their applications on the Windows Phone platform. Making the dev experience easier is the first step to this but then it’s a chicken and egg problem of not having enough market share to make it attractive for both ends of the spectrum.
Alongside this Microsoft also announced the ability for web pages to use features of the Windows Phone platform, enabling them to become hosted web pages with enhanced functionality. It’s an interesting approach for enabling a richer web experience however it feels like something that should probably be a generalized standard rather than a proprietary tech that only works for one platform. Microsoft has shown that they’re willing to open up products like this now, something they never did in the past, so potentially this could just be the beachhead to see whether or not there’s any interest before they start pushing it to a wider audience.
This is definitely a great step in the right direction for Microsoft, as anything they can do to reduce the barrier to supporting their ecosystem will go a long way towards attracting more developers to their platform. There’s still a ways to go to making their mobile platform a serious contender with the current big two, but should this app portability program pay dividends then there’s real potential for them to start clawing back some of the market share they once had. It’s likely going to be some time before we know if this gamble will pay off for Microsoft but I think everyone can agree that they’re at least thinking along the right lines.
My Xperia Z managed to last almost 2 years before things started to go awry. Sure it wasn’t exactly a smooth road for the entire time I had the phone, what with the NFC update refusing to apply every time I rebooted my phone or the myriad of issues that plagued its Android 4.4 release, but it worked well enough that I was willing to let most of those problems slide. However the last month of its life saw its performance take a massive dive and, no matter what I did to cajole it back to life, it continued to splutter and stutter, making for a rather frustrating experience. I had told myself that my next phone would be a stock Android experience so I could avoid any potential carrier or manufacturer issues, and that left me with one option: the Nexus 6. I’ve had this phone for just over a month now and I have to say that I can’t see myself going back to a non-stock experience.
First things first: the size. When I moved to the Xperia Z I was blown away by how big it was and figured that anything bigger would just become unwieldy. Indeed when I pulled the Nexus 6 out of the box it certainly felt like a behemoth beside my current 5″ device, however it didn’t take me long to grow accustomed to the size. I attribute this mostly to the subtle design features, like the tapered edges and the small dimple on the back where the Motorola logo is, which make the phone feel both thinner and more secure in the hand than its heft would suggest. I definitely appreciate the additional real estate (and the screen is simply gorgeous) although had the phone come in a 5″ variant I don’t think I’d be missing out on much. Still, if size is the only thing holding you back from buying this handset I’d err on the side of taking the plunge, as it quickly becomes a non-issue.
The 2 years since my last upgrade have seen a significant step up in the power that mobile devices are capable of delivering and the Nexus 6 is no exception in this regard. Under the hood it’s sporting a quad core 2.7GHz Qualcomm chip coupled with 3GB RAM and the latest Adreno GPU, the 420. Most of this power is required to drive the absolutely bonkers resolution of 2560 x 1440 which it does admirably for pretty much everything, even being able to play the recently ported Hearthstone relatively well. This is all backed by an enormous 3220mAh battery which seems more than capable of keeping this thing running all day, even when I forget that I’ve left tethering enabled (usually has about 20% left the morning after I’ve done that). The recent updates seem to have made some slight improvements to this but I didn’t have enough time before the updates came down to make a solid comparison.
Layered on top of this top end piece of silicon is the wonderful Android 5.1 (codename Lollipop) which, I’m glad to say, lives up to much of the hype that I had read about it before laying down the cash for the Nexus 6. The material design philosophy that Google has adopted for its flagship mobile operating system is just beautiful and with most of the big name applications adhering to it you get an experience that’s consistent throughout the Android ecosystem. Of course applications that haven’t yet updated their design stick out like a sore thumb, something which I can only hope will be a non-issue within a year or so. The lack of additional crapware also means that the experience across different system components doesn’t vary wildly, something which was definitely noticeable on the Xperia Z and my previous Android devices.
Indeed this is the first Android device that I’ve owned that just works, as opposed to my previous ones which always required a little bit of tinkering here or there to sand off the rough edges of either the vendor’s integration bits or the oddities of the current Android release of the time. The Nexus 6 with its stock 5.1 experience has required no such tweaking with my only qualm being that newly installed widgets weren’t available for use until I rebooted my phone. Apart from that the experience has been seamless from the initial set up (which, with NFC, was awesomely simple) all the way through my daily use through the last month.
The Nexus line of handsets always got a bad rap for the quality of the camera but, in all honesty, it seems about on par with my Xperia Z. This shouldn’t be surprising since they both came with one of the venerable Exmor chips from Sony, which have a track record of producing high quality cameras for phones. The Google Camera software layered on top of it though is streets ahead of what Sony had provided, both in terms of functionality and performance. The HDR mode seems to actually work as advertised, as demonstrated above, being able to extract a lot more detail from a scene than I would’ve expected from a phone camera. Of course the tiny sensor size still means that low light performance isn’t its strong suit, but I’ve long since moved past the point in my life where blurry pictures in a club were things I looked on fondly.
Overall I’m very impressed with the Google Nexus 6, as my initial apprehension had me worried that I’d end up regretting my purchase. I’m glad to say that’s not the case at all: my experience has been nothing short of stellar and has confirmed my suspicions that the only Android experience anyone should have is the stock one. Unfortunately that does limit your range of handsets severely, but it does seem that more manufacturers are coming around to the idea of providing a stock Android experience, opening up the possibility of more handsets with the ideal software powering them. Whilst it might not be as cheap as other Nexus phones before it, the Nexus 6 is most certainly worth the price of admission and I’d have no qualms about recommending it to other Android fans.
I remember when I travelled to the USA back in 2010 I figured that wifi was ubiquitous enough that I probably wouldn’t have to worry about getting a data plan. Back then that was partly true; indeed I was able to do pretty much everything I needed to for the first two weeks before Internet on the go became something of a necessity. Thankfully that was easily fixed by getting a $70 prepaid plan from T-Mobile, which had unlimited everything and was more than enough to cover the gap. Still, that took a good few hours out of my day to get sorted and since then I’ve always wanted a universal mobile plan that didn’t cost me the Earth.
Today Google has announced just that.
Not to be confused with Google’s other similar endeavour, Project Fi is a collaboration between Google and numerous cellular providers to give end users a single plan that will work for them across 120 countries. Fi enabled handsets, of which there is currently only one (the Nexus 6), are able to switch between wifi and a multitude of local cellular providers for calls, texts and, most important of all, data. This comes hand in hand with a bunch of other nifty features, like being able to check your voicemail through Google Hangouts and integration with Google Voice. Suffice to say it sounds like a pretty terrific deal and, thankfully, remains so even when you include the pricing.
The base plan will set you back $20, which includes unlimited domestic calls (I’m assuming that means national), unlimited texts to anywhere and access to the wifi and cellular networks that are part of the service. From there you can add data onto your plan at the rate of $10 per GB which, whilst not exactly the cheapest around (what I currently get on Telstra for $95 would cost me $120 on Fi), does come with the added benefit of being charged in 100MB increments: if you don’t use all of your data cap by the end of the month you don’t get charged for it. The benefit here is, of course, that that data works across 120 countries rather than my current one, something I would’ve made good use of back when I was travelling a lot for work.
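To make the comparison concrete, here’s a rough sketch of how the announced pricing works out. This is purely illustrative, based on the figures above: $20 base, $10 per GB, billed in 100MB increments with unused data credited back. I’m assuming 1GB = 1000MB and that partial increments round up, so each 100MB used costs $1.

```python
def fi_monthly_cost(data_used_mb):
    """Effective monthly cost in USD under the announced Fi pricing.

    $20 base covers unlimited calls and texts; data is $10/GB, with
    unused data credited back so you only pay for what you use,
    billed in 100MB increments. Assumes 1GB = 1000MB, so each
    100MB increment costs $1 (both are my assumptions, not Google's
    published fine print).
    """
    BASE = 20
    PER_INCREMENT = 1  # $10 per GB / 10 increments per GB
    increments = -(-data_used_mb // 100)  # ceiling division
    return BASE + increments * PER_INCREMENT

# The Telstra comparison above: 10GB of data on Fi
print(fi_monthly_cost(10_000))  # 120
```

The 10GB case lines up with the $120 figure quoted above; a lighter month of, say, 250MB would come out to just $23, which is where the per-increment billing starts to look attractive.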
Like many cool services, however, Fi will only be available to US residents to begin with, as their coverage map doesn’t extend far past the American border. This is most likely due to the first two providers they’ve partnered with, Sprint and T-Mobile, not having a presence elsewhere. However it looks pretty likely that Google will want to extend this partnership to carriers in other countries, mostly in the aim of reducing their underlying costs for providing data coverage overseas. The real kicker will be to see who they partner with in some countries, as depending on who they choose the experience could be wildly different, something I’m sure they’re keen to avoid.
I don’t think I’d make the switch to Google Fi right now even if it was available, at least not until I’d had a few good reports on how their service compares to the other big providers. To be sure, it’d definitely be something I’d like to have when I’m travelling, especially considering how much more I can get done on my phone compared to when I last spent a good chunk of time abroad. As my everyday provider, though, I’m not so sure, as the features they’re currently offering aren’t enough to overcome the almost $30 price differential.
I’m sure that will change with time, however.
It’s sometimes hard to remember that smartphones are still a recent phenomenon, with the first devices to be categorised as such being less than a decade old. Sure there were phones before that which you could say were smartphones, but back then they were more an amalgam of a PDA and a phone than a seamless blend of the two. Back then the landscape of handset providers was wildly different, one that was dominated by a single player: Nokia. Their failure to capitalize on the smartphone revolution is a testament to incumbents failing to react to innovative upstarts, and the sale of their handset business to Microsoft an admission of that failure. You can then imagine my surprise that the now much smaller company is eyeing off a return to the smartphone market, as pretty much everyone would agree the horse has long since bolted for Nokia.
The strategy is apparently being born out of the Nokia Technologies arm, the smallest of the three branches that remained after the deal with Microsoft (the other two being its network devices and Here location divisions). This is the branch that holds Nokia’s 10,000 or so patents, so you’d think they’d likely just be resting on their laurels and collecting patent fees for time immemorial. However this section has been somewhat busy at work, having developed and licensed two products since the Microsoft deal: the first is the Z Launcher, an Android launcher, and the second the N1, a tablet which they’ve licensed out to another manufacturer, whom they’ve also lent the Nokia brand name to. The expectation is that future Nokia devices will likely follow the latter’s model, with Nokia doing most of the backend work but then offloading it to someone else to manufacture and ship.
There’s no doubt that Nokia had something of a cult following among Windows Phone users, as they provided some of the best handsets for that platform. Their other smartphones had no such following, however, as their pursuit of their own mobile ecosystem made them extremely unappealing to developers who were already split between two major platforms. Had Nokia retained control of the Lumia brand I could see them having an inbuilt user base for a future smartphone, especially if it came in an Android flavour, however that brand (and everything that backed it) went to Microsoft, and so did all the loyalty that went with it. Nokia is essentially starting from scratch here and, unfortunately, that doesn’t bode well for the once king of the phone industry.
Coming in at the N1’s level you’re essentially competing with every other similarly specced device out there and, to be honest, it’s a market that eats up competitors like that without too much hassle. The outsourcing of the actual manufacturing and distribution means that Nokia doesn’t shoulder a lot of the risk they used to with such designs, however it also means they have little control over the final product that actually reaches consumers. That being said, the N1 does look like a solid device, but that doesn’t necessarily mean future devices will share the same level of quality.
Nokia is going to have to do something to stand out from the pack and, frankly, without that brand loyalty behind them I’m struggling to see what they could do to claw back some of the market share they once had. There are innumerable companies now with solid handset choices for nearly all sectors of the market, and the Nokia brand name just doesn’t carry the weight it once did. If they’re seriously planning a return to the smartphone market they’re going to have to do much more than just make another handset, something which I’m not entirely sure the now slimmed down Nokia is capable of doing.