Technology

Quantum Well Transistors Could Be Moore’s Saving Grace.

It seems the semiconductor industry can't go a year without someone raising that tired old flag: the impending doom of Moore's Law. Nearly every year there's a group of people out to see it finally meet its end, although to what purpose I could not tell you. However, as any industry observer will tell you, these predictions have proved incorrect for the past five decades, as any seemingly insurmountable barrier is usually overcome once the requisite billions are thrown at the problem. We are, however, coming to a point where the reigning champion behind Moore's Law, the planar transistor built on silicon, is reaching the end of its life, and so the search is on for its ultimate replacement. Whilst it seems inevitable that a new material will become the basis upon which we build our next computing empire, the question of how that material will be shaped is still unanswered, though there are rumblings of what may come.

[Image: InGaAs FinFET transistor]

For the vast majority of computing devices out there, the transistors under the hood are created in a planar fashion, i.e. they essentially exist in a two-dimensional space. In manufacturing terms this has many advantages, and the advances we've made in planar technology over the years have seen us break through many barriers that threatened to stop Moore's Law in its tracks. Adding in that additional dimension, however, is no trivial task. It's not beyond our capability, indeed my computer is powered by a component that makes use of a 3D manufacturing process, but applying it to something as complicated as a CPU requires an incredible amount of effort. The benefits of doing so are proving to be many, though, and the transistor pictured above, called a Quantum Well Field Effect Transistor (QWFET), could be the ram with which we break through the next barrier standing in Moore's Law's way.

The main driver behind progress in the CPU market is making transistors ever smaller, which allows us to pack more of them into the same space whilst also delivering benefits like reduced power consumption. However, as we shrink further, issues that could once be ignored, like the gate leakage that surfaced back at the 45nm stage, start to become fundamental blockers to progress. Right now, as we approach sizes below 10nm, that same problem is starting to rear its head again and we need innovative solutions to tackle it. The QWFET is one such solution, as it has the potential to eliminate the leakage problem whilst allowing us to continue our die-shrinking ways.

QWFETs are essentially an extension of Intel's current FinFET technology. In current FinFETs, electrons are bounded on three sides, which is what helped Intel make their current die shrink workable (although it has taken them much longer than expected to get the yields right). In QWFETs, the electrons are bounded on an additional side, forming a quantum well inside the transistor. This drastically reduces the leakage that would otherwise plague a transistor of sub-10nm size and, as a bonus, significantly reduces power draw, as static power usage drops considerably.
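To put some rough numbers on that, here's a back-of-the-envelope sketch. Every figure in it is an illustrative assumption of mine, not an Intel spec; the point is simply that static power scales directly with leakage current, so confining the electrons on that extra side pays off across the entire chip.

```python
# Rough static power estimate for a hypothetical billion-transistor chip.
# All figures are illustrative assumptions, not Intel specs.
v_dd = 0.8                  # assumed supply voltage (volts)
leak_per_transistor = 1e-9  # assumed leakage per transistor (1 nA)
transistor_count = 1e9      # a billion transistors

static_power_w = v_dd * leak_per_transistor * transistor_count
print(f"Static power: {static_power_w:.2f} W")                # 0.80 W

# If quantum well confinement cut that leakage tenfold:
print(f"With 10x less leakage: {static_power_w / 10:.2f} W")  # 0.08 W
```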

This all sounds good in principle and would be easy to write off as hot air had Intel not been working on it since at least 2010. Some of their latest research points to these kinds of transistors being the way forward all the way down to 5nm, which would keep Moore's Law trucking along for quite some time considering we're just on the cusp of 14nm products hitting our shelves. Of course this is all speculative at this stage; however, there's a lot of writing on the wall pointing to this as the way forward. If it turns out not to be the case, I'd be very interested to see what Intel has up their sleeves, as it'd have to be something even more revolutionary than this.

Either way it'll be great for us supporters of Moore's Law and, of course, users of computers in general.

SolaRoad Demonstrates Working Solar Roads.

After the hubbub that Solar Freakin' Roadways caused last year (with reactions ranging in tone from hopeful to critical), all seemed to have gone quiet on the potentially revolutionary road surface front. I don't think anyone expected us to be laying these things down en masse once the Indiegogo campaign finished, but I've been surprised that I hadn't heard more about them in the year that's gone by. Whilst Solar Roadways might not have been announcing their progress from the rooftops, there has been some definitive movement in this space, coming to us from a Dutch company called SolaRoad. Their test track, which was installed some 6 months ago, has proven to be wildly successful, lending a lot of credibility to an idea that some saw as just an elaborate marketing campaign.

The road was constructed as a bike path totalling about 70m in length. Over the last 6 months it has generated some 3,000kWh, a considerable amount of energy given the less than ideal conditions these panels have found themselves in. Translating that figure into an annual number gives around 70kWh per square metre per year, which might not sound like much, indeed it's in line with my "worst case" scenario when I first blogged about this last year (putting the payback time at ~15 years or so), but that's energy a regular road doesn't generate to offset its own installation cost.
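As a quick sanity check on those figures, the arithmetic below works through the reported numbers; the electricity price and installation cost are my own assumed values for illustration, not SolaRoad's.

```python
# Sanity-checking SolaRoad's reported output and the ~15 year payback guess.
# Electricity price and install cost per m2 are assumptions, not SolaRoad data.
generated_kwh_6_months = 3000
annual_kwh = generated_kwh_6_months * 2        # naive annualisation

reported_yield = 70                            # kWh per m2 per year (reported)
implied_panel_area_m2 = annual_kwh / reported_yield
print(f"Implied panel area: ~{implied_panel_area_m2:.0f} m2")   # ~86 m2

assumed_price_eur_per_kwh = 0.20               # assumed retail electricity price
annual_value_per_m2 = reported_yield * assumed_price_eur_per_kwh  # ~14 EUR/m2/yr
assumed_install_cost_per_m2 = 200              # assumed installed cost per m2
print(f"Payback: ~{assumed_install_cost_per_m2 / annual_value_per_m2:.0f} years")
# Payback: ~14 years, in line with the "worst case" scenario above
```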

Like Solar Roadways' design, SolaRoad is essentially a thick layer of protective glass above the solar panels, which are backed by a layer of rubber and concrete. Instead of the hexagonal tile design they've gone for flat panels, which would appear to be more congruent with traditional road design, although I'll be the first to admit I'm not an expert in this field. By all accounts their design has stood the test of time, at least under the light load of cycling (although they claim it could handle a fire truck). The next stage for them would be a full-scale trial on a road that sees a decent amount of traffic, as whilst a cycleway is a good indication of how it will perform, there's nothing better than throwing the challenges of daily traffic volumes at it.

Unfortunately SolaRoad isn't yet ready to release a potential price per kilometre installed; however, the entire program, including the research to design the coatings and the road itself, has come to some €3.7 million. Considering my original estimates pegged a competitive cost at around $1 million per kilometre, I'd say the trial has been a pretty good investment (unless you'd really rather have had 4km worth of road somewhere instead…). Cost will ultimately determine whether something like this can become a feasible alternative to our current asphalt surfaces, as the idea won't get any traction if it's noticeably more expensive than its traditional counterpart.

It's good to see progress like this, as it shows the idea has some merit and definitely warrants further investigation. Whilst the power generation numbers might not be revolutionary, there's something to be said for a road that pays itself off over time, especially when that comes in the form of renewable energy. With further advances in grid technology and energy storage these roadways, in conjunction with other renewables, could form the basis of a fossil fuel free future. There's a long way to go between today and that idyllic future, but projects like this ensure we keep making progress towards it.

Microsoft’s Azure Stack: It’s Finally Here.

I've spent the better part of the last 4 years banging on about how the hybrid cloud should be the goal all cloud services work towards. Whilst the argument can be made that this is born out of some protectionist feeling for on-premises infrastructure, it's more that I could never see large organisations fully giving up control of their infrastructure to the cloud. However, the benefits of using the cloud, both in terms of its IaaS and PaaS capabilities, are undeniable, and thus the ideal scenario is a blend of the two. Only one cloud provider has seriously considered this position, likely because of their large footprint in the enterprise space, and today Microsoft launched the next stage in its cloud strategy: the Microsoft Azure Stack.

The Azure Stack appears to be an extension of the Azure Pack that Microsoft released a couple of years ago, bringing many of the backend features Microsoft itself uses to power the Azure cloud to the enterprise. However, whilst the Azure Pack was more of an interface that brought a whole lot of tools together, the Azure Stack is its own set of technologies that elevates your current IT infrastructure with Azure features. As to what those features are exactly, Microsoft isn't being any more specific than IaaS and PaaS for now, although the latter indicates that some of the juicier Azure features, like Table Storage, could potentially find their way into your datacenter.

The idealized hybrid cloud scenario that many have talked about for years is an on-premises deployment that can burst out to the cloud for additional resources when the need strikes. Whilst this was theoretically possible before, if you invested the time to develop or customize your applications to take advantage of it, examples of successful implementations were few and far between. The improvements that come with the Microsoft Azure Stack make such a scenario far more achievable than it ever was, allowing developers to build applications against a common platform that remains consistent no matter where the application finds itself running. At the same time, supporting infrastructure applications can benefit from those same advantages, greatly reducing the complexity of administering such an environment.

This comes hand in hand with the announcement of the Microsoft Operations Management Suite, which is essentially the interface to your on-premises cloud. Microsoft is positioning it as the one interface to rule them all, as it's capable of interfacing with all the major cloud providers as well as the various on-premises solutions their competitors provide. The initial release will focus on 4 key areas: log analytics, security, availability and automation, with more features to come at a "rapid pace" as the product matures. For me the most interesting parts are availability (apparently enabling a cloud restore of an application regardless of where it sits) and the automation tooling, but I'll need to have a play with it before I call out a favourite.

The Microsoft Azure Stack is by far the most exciting announcement to come out of Redmond in a long time, as it shows they're dedicated to giving their enterprise customers the same experience they currently deliver to their cloud counterparts. The wall that has existed ever since the inception of the first cloud service is quickly breaking down, enabling enterprise IT to do far more than it ever could. This new Microsoft, undoubtedly powered by Nadella's focus on building upon the strong base he created in the Server and Tools division, is one its competitors should be wary of, as they're quickly eating everyone else's lunch.

Tesla’s Powerwall: Home Power Revolutionized.

The problem most renewables face is that they don't generate power constantly, requiring some kind of energy storage medium to provide power when they're not generating. Batteries are the first thing that comes to mind when looking for such a device; however, the ones used for most home power applications aren't any more advanced than your typical car battery. Other methods of storing power, like pumped hydro or compressed air, are woefully inefficient, shedding much of the generated power as waste heat or in the process of converting it back to electricity when it's needed. Many have tried to revolutionize this industry but few have made meaningful progress. That was until Tesla announced the Powerwall.

The Powerwall is an interesting device: essentially a 7kWh (or 10kWh, depending on your application) battery that mounts to your wall and can provide power to your house. Unlike traditional systems, which had to be installed outside due to their batteries producing hydrogen gas, the Powerwall can be mounted anywhere on your house. In a grid-connected scenario the Powerwall can store power during off-peak times and release it during peak usage, reducing the cost of your energy consumption. The ideal scenario, however, is to connect it to a solar array on the roof, storing that energy for use later. All of this comes at the incredibly low price point of $3,000 for the 7kWh model, with the larger variant a mere $500 more. Suffice to say this product has the potential for some truly revolutionary applications, not least of which is reducing our reliance on fossil fuel generated power.

The solar incentives that many countries have brought in over the last few years have seen an explosion in the number of houses with domestic solar arrays. This, in turn, has brought the cost of getting solar installed down to ridiculously low levels, even less than $1/watt installed in some cases. However, with the end of the generous feed-in tariffs these panels are often no longer economical, as feed-in rates are usually below the retail rate. Using a Tesla Powerwall would mean this energy, which would otherwise be sold at a comparative loss, could be used when it's needed. This would reduce load on the grid whilst also improving the ROI of the panels and the Powerwall system, a win-win in anyone's books.
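The value of that self-consumption is simply the gap between the retail rate you avoid paying and the feed-in rate you give up. The sketch below uses rates and daily usage figures I've assumed purely for illustration; only the $3,000 price comes from Tesla's announcement.

```python
# Simple self-consumption payback sketch for the 7 kWh Powerwall.
# Rates and daily usage are assumptions; only the unit price was announced.
usable_kwh_per_day = 6.0     # assume ~85% of the 7 kWh cycled daily
feed_in_rate = 0.08          # $/kWh paid for exported solar (assumption)
retail_rate = 0.25           # $/kWh for grid power (assumption)

value_per_stored_kwh = retail_rate - feed_in_rate
annual_saving = usable_kwh_per_day * 365 * value_per_stored_kwh
print(f"Annual saving: ${annual_saving:.0f}")                     # ~$372

unit_price = 3000.0          # announced price of the 7 kWh model
print(f"Simple payback: {unit_price / annual_saving:.1f} years")  # ~8.1 years
```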

It would be one thing if Tesla were just making another product; however, it seems Elon Musk has a vision that extends far beyond ripping the battery out of his cars and selling it as a grid-connected device. The keynote speech he gave a few days ago is evidence of that and is worth watching if you have the time.

In its current incarnation the Tesla Powerwall is a great device, one that will make energy storage feasible for a much wider consumer base. However, I can't help but feel this is just Tesla's beachhead into a much larger vision, and future revisions of the Powerwall will likely bring even larger capacities at similar or lower prices. Indeed, this is all coming before Tesla has completed its Gigafactory 1, which is predicted to reduce the cost of the batteries by some 30%, with further iterations driving it down even more. Suffice to say I'm excited, as it makes a fully renewable future not only inevitable but tantalizingly close to reality.

Windows Universal Apps to Enable Porting from iOS/Android.

Microsoft has been pursuing its unified platform strategy for some time now, with admittedly mixed results. The infrastructure to build that kind of unified experience is there, and Microsoft's own applications have demonstrated that it can be taken advantage of, but it hasn't spread to third-party developers and integrators the way they intended. A big part of this is the fact that their mobile offering, Windows Phone, is a very minor player that has been largely ignored by the developer community. Whilst its enterprise integration can't be beaten, the consumer experience, which is key to driving further adoption of the platform, has been severely lacking. Today Microsoft announced a radical new approach to improving this: allowing iOS and Android apps to run as Universal Applications on the Windows platform.

The approach differs slightly between platforms; however, the final outcome is the same: applications written for the two current kings of the smartphone world can run as universal applications on supported Windows platforms. Android applications can be submitted in their native APK form and will run in a para-virtualized environment (one that includes aspects of both emulation and direct subsystem integration). iOS applications, on the other hand, can, as of today, be compiled directly from Objective-C into Universal Applications that run on Windows phones. Of course there will likely still be some effort required to bring the UX into line, but not having to maintain separate core codebases means the barriers to developing a cross-platform app that includes Windows Phone essentially drop to nothing.

Whether or not this will translate into more people jumping onto the Windows Phone ecosystem isn't something I can readily predict. Windows Phone has languished in single-digit market share ever since its inception, and all the changes Microsoft has made to lift that number haven't had a meaningful impact. A better app ecosystem will be a drawcard for those who like Microsoft but haven't wanted to make the transition, but this all relies on developers taking the time to release their applications on the Windows Phone platform. Making the dev experience easier is the first step, but then it's a chicken-and-egg problem: without enough market share, the platform isn't attractive to either end of the spectrum.

Alongside this, Microsoft also announced the ability for web pages to use features of the Windows Phone platform, enabling them to become hosted web apps with enhanced functionality. It's an interesting approach to enabling a richer web experience; however, it feels like something that should be a generalized standard rather than proprietary tech that only works on one platform. Microsoft has shown they're willing to open up products like this now, something they never did in the past, so this could just be a beachhead to see whether there's any interest before they push it to a wider audience.

This is definitely a great step in the right direction for Microsoft, as anything they can do to reduce the barrier to supporting their platforms will go a long way towards attracting more developers to their ecosystem. There's still a ways to go before their mobile platform is a serious contender against the current big two, but should this app portability program pay dividends there's real potential for them to start clawing back some of the market share they once had. It will likely be some time before we know whether this gamble pays off, but I think everyone can agree they're at least thinking along the right lines.

Nexus 6: Stock Android is the Only Way to Fly.

My Xperia Z managed to last almost 2 years before things started to go awry. Sure, it wasn't exactly a smooth road for the entire time I had the phone, what with the NFC update refusing to apply every time I rebooted and the myriad issues that plagued its Android 4.4 release, but it worked well enough that I was willing to let most of those problems slide. However, the last month of its life saw its performance take a massive dive, and no matter what I did to cajole it back to life it continued to sputter and stutter, making for a rather frustrating experience. I had told myself that my next phone would be a stock Android experience, so I could avoid any potential carrier or manufacturer issues, and that left me with one option: the Nexus 6. I've had this phone for just over a month now and I have to say I can't see myself going back to a non-stock experience.

First things first: the size. When I moved to the Xperia Z I was blown away by how big it was and figured anything bigger would just become unwieldy. Indeed, when I pulled the Nexus 6 out of the box it certainly felt like a behemoth beside my then-current 5″ device, but it didn't take me long to grow accustomed to the size. I attribute this mostly to subtle design features, like the tapered edges and the small dimple on the back where the Motorola logo is, which make the phone feel both thinner and more secure in the hand than its heft would suggest. I definitely appreciate the additional real estate (and the screen is simply gorgeous), although had the phone come in a 5″ variant I don't think I'd be missing out on much. Still, if the size is the only thing holding you back from buying this handset, I'd err on the side of taking the plunge, as it quickly becomes a non-issue.

The 2 years since my last upgrade have seen a significant step up in the power mobile devices can deliver, and the Nexus 6 is no exception. Under the hood it sports a quad-core 2.7GHz Qualcomm chip coupled with 3GB of RAM and the latest Adreno GPU, the 420. Most of this power is needed to drive the absolutely bonkers 2560 x 1440 resolution, which it does admirably for pretty much everything, even handling the recently ported Hearthstone relatively well. This is all backed by an enormous 3220mAh battery, which seems more than capable of keeping the thing running all day, even when I forget I've left tethering enabled (there's usually about 20% left the morning after I've done that). Recent updates seem to have made slight improvements here, but I didn't have enough time before they came down to make a solid comparison.

Layered on top of this top-end piece of silicon is the wonderful Android 5.1 (Lollipop) which, I'm glad to say, lives up to much of the hype I had read before laying down the cash for the Nexus 6. The material design philosophy Google has adopted for its flagship mobile operating system is just beautiful, and with most big-name applications adhering to it you get an experience that's consistent throughout the Android ecosystem. Of course, applications that haven't yet updated their design stick out like a sore thumb, something I can only hope will be a non-issue within a year or so. The lack of additional crapware also means the experience doesn't vary wildly across different system components, something that was definitely noticeable on the Xperia Z and my previous Android devices.

Indeed, this is the first Android device I've owned that just works, as opposed to my previous ones, which always required a little tinkering here or there to sand off the rough edges of either the vendor's integration bits or the oddities of the Android release of the time. The Nexus 6 with its stock 5.1 experience has required no such tweaking, my only qualm being that newly installed widgets weren't available for use until I rebooted the phone. Apart from that, the experience has been seamless, from the initial setup (which, with NFC, was awesomely simple) all the way through my daily use over the last month.

[Image: HDR sample shot taken with the Nexus 6]

The Nexus line of handsets has always gotten a bad rap for camera quality but, in all honesty, this one seems about on par with my Xperia Z. That shouldn't be surprising, since they both use one of the venerable Sony Exmor sensors, which have a track record of producing high-quality phone cameras. The Google Camera software layered on top, though, is streets ahead of what Sony provided, both in functionality and performance. The HDR mode actually works as advertised, as demonstrated above, extracting far more detail from a scene than I would've expected from a phone camera. Of course, the tiny sensor still means low-light performance isn't its strong suit, but I've long since moved past the point in my life where blurry pictures in a club were things I looked on fondly.

Overall I'm very impressed with the Google Nexus 6, as my initial apprehension had me worried I'd end up regretting the purchase. I'm glad to say that's not the case at all: my experience has been nothing short of stellar and has confirmed my suspicion that the only Android experience anyone should have is the stock one. Unfortunately that does limit your range of handsets severely, but more manufacturers do seem to be coming around to the idea of providing a stock Android experience, opening up the possibility of more handsets with the ideal software powering them. Whilst it might not be as cheap as the Nexus phones before it, the Nexus 6 is most certainly worth the price of admission and I'd have no qualms recommending it to other Android fans.

Google’s Project Fi: Breaking Down Communication Barriers.

When I travelled to the USA back in 2010, I figured wifi was ubiquitous enough that I probably wouldn't have to worry about getting a data plan. Back then that was partly true; indeed, I was able to do pretty much everything I needed for the first two weeks before Internet on the go became something of a necessity. Thankfully that was easily fixed with a $70 prepaid plan from T-Mobile that had unlimited everything, which was more than enough to cover the gap. Still, it took a good few hours out of my day to get that sorted, and ever since I've wanted a universal mobile plan that didn't cost me the Earth.

Today Google has announced just that.

Not to be confused with Google's other, similarly named endeavour, Project Fi is a collaboration between Google and numerous cellular providers to give end users a single plan that works for them across 120 countries. Fi-enabled handsets, of which there is currently only one, the Nexus 6, are able to switch between wifi and a multitude of local cellular providers for calls, texts and, most important of all, data. This comes hand in hand with a bunch of other features, like being able to check your voicemails through Google Hangouts, as well as other nifty features like Google Voice. Suffice to say it sounds like a pretty terrific deal and, thankfully, remains so even when you factor in the pricing.

The base plan will set you back $20, which includes unlimited domestic calls (I'm assuming that means national), unlimited texts to anywhere, and access to the wifi and cellular networks that are part of the service. From there you add data at a rate of $10 per GB which, whilst not exactly the cheapest around (what I currently get on Telstra for $95 would cost me $120 on Fi), does come with the added benefit of being charged in 100MB increments: if you don't use all of your data cap by the end of the month, you don't get charged for it. The kicker is, of course, that the data works across 120 countries rather than my current 1, something I would've made good use of back when I was travelling a lot for work.
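To make the billing model concrete, here's a small sketch of how a monthly bill would work out under my reading of the announced pricing; the usage figures are assumptions purely for illustration.

```python
import math

# Sketch of Project Fi's announced pricing as I understand it:
# $20 base, $10 per GB of data, unused data credited back.
BASE_PLAN = 20.00        # unlimited domestic calls and texts
RATE_PER_GB = 10.00

purchased_mb = 3000      # data budget chosen up front (assumption)
used_mb = 2250           # actual usage this month (assumption)

# Billed per 100 MB block, so 2,250 MB is charged as 2.3 GB.
billed_blocks = math.ceil(used_mb / 100)
data_charge = billed_blocks * RATE_PER_GB / 10

credit = (purchased_mb - billed_blocks * 100) / 1000 * RATE_PER_GB
total = BASE_PLAN + data_charge
print(f"Data: ${data_charge:.2f}, credited back: ${credit:.2f}")
print(f"Total bill: ${total:.2f}")
# Data: $23.00, credited back: $7.00, total bill: $43.00
```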

Like many cool services, however, Fi will only be available to US residents to begin with, as the coverage map doesn't extend far past the American border. This is most likely due to the first two carriers they've partnered with, Sprint and T-Mobile, not having a presence elsewhere. However, it looks pretty likely that Google will want to extend this partnership to carriers in other countries, mostly with the aim of reducing the underlying cost of providing data coverage overseas. The real kicker will be seeing who they partner with in each country, as depending on who they choose the experience could vary wildly, something I'm sure they're keen to avoid.

I don't think I'd make the switch to Project Fi right now even if it were available, at least not until I'd seen a few good reports on how the service compares to the other big providers. To be sure, it'd definitely be something I'd like to have when travelling, especially considering how much more I can get done on my phone now compared to when I last spent a good chunk of time abroad. As my everyday provider, though, I'm not so sure, as the features currently on offer aren't enough to overcome the almost $30 price differential.

I’m sure that will change with time, however.

Nokia’s Return Too Little, Far Too Late.

It's sometimes hard to remember that smartphones are still a recent phenomenon, with the first devices to be categorised as such being less than a decade old. Sure, there were phones before that which you could call smartphones, but back then they were more an amalgam of a PDA and a phone than a seamless blend of the two. The landscape of handset providers was wildly different then, dominated by a single player: Nokia. Their failure to capitalize on the smartphone revolution is a testament to incumbents failing to react to innovative upstarts, and the sale of their handset business to Microsoft was an admission of that failure. You can then imagine my surprise that the now much smaller company is eyeing off a return to the smartphone market, as pretty much everyone would agree the horse has long since bolted for Nokia.

The strategy is apparently being born out of the Nokia Technologies arm, the smallest of the three branches that remained after the deal with Microsoft (the other two being the networks business and the HERE location division). This is the branch that holds Nokia's 10,000 or so patents, so you'd think they'd be resting on their laurels and collecting patent fees for time immemorial. However, this division has been busy at work, having developed and licensed two products since the Microsoft deal: the first is the Z Launcher, an Android launcher, and the second the N1, a tablet licensed out to another manufacturer, to whom they've also lent the Nokia brand name. The expectation is that future Nokia devices will likely follow the N1's model, with Nokia doing most of the design work and then offloading manufacturing and shipping to someone else.

There's no doubt Nokia had something of a cult following among Windows Phone users, as they provided some of the best handsets for that platform. Their other smartphones had no such following, however, as Nokia's pursuit of its own mobile ecosystem made them extremely unappealing to developers who were already split between two major platforms. Had Nokia retained control of the Lumia brand I could see them having a built-in user base for a future smartphone, especially if it came in an Android flavour; however, that brand (and everything that backed it) went to Microsoft, and so did all the loyalty that went with it. Nokia is essentially starting from scratch here and, unfortunately, that doesn't bode well for the one-time king of the phone industry.

Coming in at that level, you're essentially competing with every other similarly specced handset out there and, to be honest, it's a market that eats up competitors like that without much hassle. Outsourcing the actual manufacturing and distribution means Nokia doesn't shoulder a lot of the risk it used to with such designs; however, it also means they have little control over the final product that reaches consumers. That being said, the N1 does look like a solid device, but that doesn't necessarily mean future devices will share the same level of quality.

Nokia is going to have to do something to stand out from the pack and, frankly, without their brand loyalty behind them I’m struggling to see what they could do to claw back some of the market share they once had. There are innumerable companies now that have solid handset choices for nearly all sectors of the market and the Nokia brand name just doesn’t carry the weight it once did. If they’re seriously planning a return to the smartphone market they’re going to have to do much more than just make another handset, something which I’m not entirely sure the now slimmed down Nokia is capable of doing.

Chef Watson: Big Data Might Finally be Usable.

The promise of Big Data has been courting many a CIO for years now, the allure being that all the data they have on everything can be fed into some giant engine that will spit out insights for them. However, like all things, the promise and the reality are vastly different beasts, and whilst there are examples of Big Data providing never-before-seen insights, it hasn't revolutionized industries the way other technologies have. A big part of that is that Big Data tools aren't push-button solutions; they require a deep understanding of data science to garner the insights you seek. IBM's Watson, however, is a much more general-purpose engine, one I believe could deliver on the promises its Big Data compatriots have made.

The problem I see with most Big Data solutions is that they're not generalizable, i.e. a solution developed for a specific data set (say, a logistics company wanting to know how long a package takes to get from one place to another) will likely not be applicable anywhere else. This means that whilst you have the infrastructure and capability to generate insights, the investment required to attain them must be reapplied every time you want to look at the data in a different way, or whenever you have other data requiring similar insights. Watson, on the other hand, falls more into the category of a general-purpose data engine, one that can ingest all sorts of data and provide meaningful insights, even on things you wouldn't expect, like helping to author a cookbook.

The story behind how that came about is particularly interesting, as it shows what I feel is the power of Big Data without the need for a data science degree to exploit it. Essentially, Watson was fed over 9,000 (ha!) recipes from Bon Appétit's database, which were then supplemented with its knowledge of flavour profiles. It used all this information to derive new combinations you wouldn't typically think of, which were then handed back to the chefs to prepare. Compared to traditional recipes, the ingredient lists Watson produced were much longer and more involved; however, the results (much of the credit for which should go to the chefs preparing them) were well received, showing that Watson provided insight that would otherwise have been missed.
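To illustrate the general idea, and to be clear this is a toy sketch of the flavour-pairing concept rather than IBM's actual algorithm or data, you can rank ingredient combinations a recipe corpus hasn't seen by how many aroma compounds they share:

```python
from itertools import combinations

# Toy food-pairing sketch: score ingredient pairs absent from a recipe
# corpus by shared aroma compounds. Illustrative data, not IBM's.
compounds = {
    "strawberry": {"furaneol", "linalool", "hexanal"},
    "basil":      {"linalool", "estragole", "eugenol"},
    "chocolate":  {"furaneol", "pyrazine", "vanillin"},
    "beef":       {"pyrazine", "hexanal", "furfural"},
}
known_pairs = {frozenset(("strawberry", "chocolate"))}  # already in recipes

novel = []
for a, b in combinations(compounds, 2):
    if frozenset((a, b)) in known_pairs:
        continue                      # skip combinations recipes already use
    shared = len(compounds[a] & compounds[b])
    novel.append((shared, a, b))

# Highest compound overlap first: candidate "new" pairings.
for shared, a, b in sorted(novel, reverse=True):
    print(f"{a} + {b}: {shared} shared compounds")
```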

That would just be an impressive demonstration of data science were it not for the fact that Watson is now being used to provide similar levels of insight across a vast number of industries, from medicine to online shopping to matching remote workers with employers seeking their skills. Whilst it falls far short of what most people would class as a general AI (it's more akin to a highly flexible expert system over the data it's provided), Watson has shown it can be fed a wide variety of data sets and then be queried in a relatively straightforward way. That last part, I believe, is the secret sauce to making Big Data usable, and it could be the next big thing for IBM.

Whether or not they can capitalize on it, though, will determine whether Watson becomes the one Big Data platform to rule them all or simply an interesting footnote in the history of expert systems. Watson has already proven its capabilities numerous times over, so fundamentally it's ready to go; the responsibility now resides with IBM to get it into the right hands for further development. Watson's presence is growing slowly, but I'm sure a killer app isn't too far off.

The New Battery Tech Conundrum.

The batteries in our portable devices never seem to be big enough; in fact, in some cases they seem to be getting worse. Gone are the days when forgetting to charge your phone for days at a time wasn't an issue, and you'll be lucky to get a full day's use out of your laptop before it starts screaming to be plugged back into the wall. The cold hard fact of the matter is that storing electrical energy in a portable fashion is hard, and since the amount of energy you can store is largely a function of a battery's size, those lovely slim smartphones you love are at odds with increased battery life. Of course there are always improvements to be made; however, a breakthrough in one aspect usually comes at the cost of something else.

Take, for instance, the latest announcement out of Stanford University: a battery that can be fully charged in under a minute and that, if its creators are to be believed, could replace the battery tech that powers all our modern devices. Their battery is based on aluminium-ion technology, which works in a very similar way to the lithium-ion technology behind most rechargeable devices. It's hard to deny the list of advantages their battery tech has: cheaper components, safer operation and, of course, those fast charging times. However, the advantages start to look a lot less appealing when you see the two disadvantages they currently have to work past.

The voltage and energy density.

As the tech stands now, the usable voltage the battery can put out is around 2 volts, about half of what most devices currently use. Sure, you could get around this with various tricks (a DC step-up converter, batteries in series, etc.); however, these all reduce the efficiency of your battery and add complexity to the device you put them in. Thus, if these batteries are to be drop-in replacements for current lithium-ion tech, the researchers will have to work out how to raise the voltage significantly without heavily impacting the other aspects that make the technology desirable.
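To see why the voltage gap matters so much, remember that a cell's stored energy is its charge capacity multiplied by its voltage. The figures below are illustrative, with 3.7V being a typical lithium-ion nominal voltage:

```python
# Stored energy = charge capacity x voltage, so a 2 V cell holds far
# less energy than a 3.7 V Li-ion cell of the same mAh rating.
capacity_ah = 3.0            # same charge capacity for both cells (3000 mAh)

li_ion_v = 3.7               # typical Li-ion nominal voltage
al_ion_v = 2.0               # reported aluminium-ion cell voltage

li_ion_wh = capacity_ah * li_ion_v   # 11.1 Wh
al_ion_wh = capacity_ah * al_ion_v   #  6.0 Wh
print(f"Li-ion: {li_ion_wh:.1f} Wh vs Al-ion: {al_ion_wh:.1f} Wh")
print(f"Al-ion holds {al_ion_wh / li_ion_wh:.0%} of the energy")  # 54%

# Two Al-ion cells in series reach 4.0 V but double the volume, while
# a DC step-up converter instead typically burns roughly another 10%
# of the energy in conversion losses.
```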

The latter problem is the more difficult one, and it's something all new battery technologies struggle with. With any battery tech you're balancing quite a few factors to make the best tradeoffs for your particular use case, and one of the most common tradeoffs is between charge time and the amount of energy you can store. In general, the quicker a battery can charge the less energy-dense it is, meaning fast charge times come at the cost of usable life once it's off the charger. Indeed, this is exactly the issue the new aluminium-ion battery is struggling with, as its current energy density does not match that of lithium-ion.

Now, this isn't to say the idea is worthless, more that when you hear about these amazing kinds of batteries or supercapacitors (a different kind of technology, but an energy storage medium all the same) with some revolutionary property, your first reaction should be to ask what the tradeoffs were. There's a reason sealed lead-acid, nickel-metal hydride and other seemingly ancient battery technologies are still used the world over: they're perfect at the jobs they've found themselves in. Whilst charging your phone in a minute might be great on paper, if it came with a battery life a mere 20% of its slower-charging competitors', I'm sure most people would choose the latter. Hopefully the researchers can overcome the current drawbacks and make something truly revolutionary, but I'll stay skeptical until proven otherwise.