Technology

Alienware Graphics Amplifier

External GPUs are a Solution in Search of a Problem.

If you’re a long-time PC gamer chances are you’ve considered getting yourself a gaming laptop at one point or another. The main attraction of such a device is portability, especially back in the heyday of LANs when steel cases and giant CRTs were a right pain to lug around. However they always came at a cost, both financially and in opportunity, as once you bought yourself a gaming laptop you were locked into those specs until you bought another one. Alienware, a longtime manufacturer of gaming laptops, has cottoned onto this issue and has developed what they’re calling the Graphics Amplifier in order to bring desktop level grunt and upgradeability to their line of laptops.


On the surface it looks like a giant external hard drive but inside are all the components required to run any PCIe graphics card: a small circuit board with a PCIe x16 slot, a 450W power supply and a host of other connections because why not. There are no fans or any other cooling to speak of, however, so you’re going to want a card with a blower style cooler on it, something you’ll only see on reference cards these days. This then connects back to an Alienware laptop through a proprietary connection (unfortunately) which allows the graphics card to act as if it’s installed in the system. The enclosure retails for about $300 without a graphics card included, which means you’re up for about $600+ once you buy a decent card to go in it. That’s certainly not out of reach for those who are already investing $1800+ in the requisite laptop but it’s certainly enough to make you reconsider the laptop purchase in the first place.

You see whilst this external case does appear to work as advertised (judging by the various articles that have popped up with it) it essentially removes the most attractive thing about having a gaming capable laptop: the portability. Sure this is probably more portable than a mini tower and a monitor but at the same time this case is likely to weigh more than the laptop itself and won’t fit into your laptop carry bag. The argument could be made that you wouldn’t need to take this with you, this is only for home use or something, but even then I’d argue you’d likely be better off with a gaming desktop and some slim, far more portable laptop to take with you (both of which could be had for the combined cost of this and the laptop).

Honestly though the days have long since passed when it was necessary to upgrade your hardware on a near yearly basis in order to be able to play the latest games. My current rig is well over 3 years old now and is still quite capable of playing all current releases, even if I have to dial back a setting or two on occasion. With that in mind you’d be better off spending the extra cash that you’d sink into this device plus the graphics card into the actual laptop itself which would likely net you the same overall performance. Then, when the laptop finally starts to show its age, you’ll likely be in the market for a replacement anyway.

I’m sure there’ll be a few people out there who’ll find some value in a device like this but honestly I just can’t see it. Sure it’s a cool piece of technology, a complete product where there have only been DIY solutions in the past, but its uses are extremely limited and not likely to appeal to those who it’ll be marketed to. Indeed it feels much like Razer’s modular PC project, a cool idea that simply won’t have a market to sell its product to. It’ll be interesting to see if this catches on but since Alienware are the first (and only) company to be doing this I don’t have high hopes.

Alan Eustace Record Breaking Jump

Google VP Alan Eustace Breaks Baumgartner’s Record.

It was just over 2 years ago that Felix Baumgartner leapt from the Red Bull Stratos capsule from a height of 39KM above the Earth, breaking a record that had stood for over 50 years. The amount of effort that went into creating that project left many, including myself, thinking that Baumgartner’s record would stand for a pretty long time as few have the resources and desire to do something of that nature. However as it turns out one of Google’s Senior Vice Presidents, Alan Eustace, had been working on breaking that record in secret for the past 3 years and on Friday last week he descended to Earth from a height of 135,890 feet (41.4KM), shattering Baumgartner’s record by over 8,000 feet.
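
Out of interest, the quoted altitude converts cleanly between units; a quick sanity check in Python:

```python
# Convert Eustace's reported jump altitude from feet to kilometres
FEET_PER_METRE = 3.28084

eustace_ft = 135_890
eustace_km = eustace_ft / FEET_PER_METRE / 1000
print(f"{eustace_km:.1f} km")  # 41.4 km, matching the reported figure
```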


The 2 jumps could not be more different, both technically and in spirit. For starters the Red Bull Stratos project was primarily a marketing exercise for Red Bull; the science that happened on the side was just a benefit for the rest of us. Eustace’s project on the other hand was done almost entirely in secret, with him eschewing any help from Google in order to avoid it becoming a marketing event. Indeed I don’t think anyone bar those working on the project knew this was coming, and the fact that they managed to achieve what Stratos did with a fraction of the funding speaks volumes about the team Eustace created to achieve this.

Looking at the above picture, which shows Eustace dangling from a tenuous tether as he ascends, it’s plain to see that their approach was radically different to Stratos. Instead of building a capsule to transport Eustace, like both Stratos and Kittinger’s project did, they went for a direct tether to his pressure suit. This meant he spent the long journey skywards dangling face down which, whilst being nightmare material for some, would’ve given him an unparalleled view of the Earth falling away beneath him. It also means that the load the balloon needed to carry was greatly reduced by comparison, which likely allowed him to ascend much more quickly.

Indeed the whole set up is incredibly bare bones with Eustace’s suit lacking many of the ancillary systems that Baumgartner’s had. One thing that amazed me was the lack of any kind of cooling system, something which meant that any heat he generated would stick around for an uncomfortably long period of time. To get around this he essentially remained motionless for the entire ascent, responding to ground control by moving one of his legs which they could monitor on camera. They did include a specially developed kind of parachute though, called Saber, which ensured that he didn’t suffer from the same control issues that Baumgartner did during his descent.

It’s simply astounding how Eustace and his team managed to achieve this, given their short time frame and comparatively limited budget. I’m also wildly impressed that they managed to keep this whole thing a secret for that period of time too as it would’ve been very easy for them to overshadow the Stratos project, especially given some of the issues they encountered. Whilst we might not all be doing high altitude jumps any time soon the technology behind this could find its way into safety systems in the coming generation of private space flight vehicles, something they will all need in short order.

Windows 10 Logo

Windows 10 Brings Vastly Improved Security.

Windows has always had a troubled relationship with security. As the most popular desktop operating system it’s frequently the target of all sorts of weird and wonderful attacks which, to Microsoft’s credit, they’ve done their best to combat. However it’s hard to forget the numerous missteps along the way like the abhorrent User Account Control system which, in its default state, did little to improve security and just added another level of frustration for users. However if the features coming in the technical preview of Windows 10 are anything to go by Microsoft might finally be making big boy steps towards improving security on their flagship OS.


Whilst there are numerous third party solutions for 2 factor authentication on Windows, like smartcards or tokens, the OS itself has never had that capability natively. This means that for the vast majority of Windows users this heightened security mode has been unavailable. Windows 10 brings with it the Next Generation Credentials service which allows users (both consumer and corporate) to enrol a device to function as a second factor for authentication. The finer mechanics of how this works are still being worked out, however the application is protected by a PIN which prevents unauthorized access to the codes, ensuring that losing your device doesn’t mean someone automatically gains access to your Windows login. Considering this kind of technology has been freely available for years (hell my World of Warcraft characters have had it for years) it’s good to see it finally making its way into Windows as native functionality.
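
For the curious, the rotating codes that authenticator apps (like the WoW one) generate come from an open standard, TOTP. The sketch below is the generic RFC 6238 algorithm, not Microsoft's still-unannounced Next Generation Credentials implementation, but it shows how little machinery is involved:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password per RFC 6238."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at T=59s yields "94287082"
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Both the device and the server share the secret, so as long as their clocks roughly agree they derive the same 6 digit code every 30 seconds without ever transmitting it.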

There’s also extensive customization available thanks to Microsoft adopting the FIDO Alliance standard rather than developing their own proprietary solution. In addition to traditional code-generating 2 factor auth you can also use your smartphone as a sort of smartcard, with it being automatically recognised when brought next to a Bluetooth enabled PC. This opens up the possibility of your phone being a second factor for a whole range of services and products that currently make use of Microsoft technology, like Active Directory integrated applications. Whilst some might lament that possibility the fact that it’s based on open standards means that such functionality won’t be limited to the Microsoft family of products.

Microsoft has also announced a whole suite of better security features, many of which have been third party products for the better part of a decade. Encryption is now available for the open and save dialogs natively within the Windows APIs, allowing developers to easily integrate encryption functionality into their applications. This comes hand in hand with controls around which applications can access said encrypted data, ensuring that data handling measures can’t be circumvented by using non-standard applications. Device lock down is also now natively supported, eliminating the need for other device access control software like Lumension (which, if you’ve ever worked with it, you’ll likely be thankful for).

It might not be the sexiest thing happening in Windows 10 but it’s by far one of the more important. As Windows is the de facto platform for many people, any improvement to its security is very much welcome and hopefully this will lead to a much more secure computing world for us all. These measures aren’t a silver bullet by any stretch of the imagination but they’ll go a long way to making Windows far more secure than it has been in the past.

Nexus 6

Nexus 6 Announced, Confirms 6 Inches is What Everyone Wants.

For the last 6 months I’ve been on the lookout for the next phone that will replace my Xperia Z. Don’t get me wrong, it’s still quite a capable phone, however not a year has gone by in the past decade that there hasn’t been one phone that triggered my geeky lust, forcing me to part ways with several hundred dollars. However the improvements made since I acquired my last handset have just been evolutionary steps forward, none of which have been compelling enough to make me get my wallet out. I had hoped that the Nexus 6 would be the solution to my woes and, whilst it’s not exactly the technological marvel I was hoping for, Google might just be fortunate enough to get my money this time around.


The Nexus 6 jumps on the huge screen bandwagon, bringing us an (almost) 6″ AMOLED display boasting a 2560 x 1440 resolution. The specs under the hood are pretty impressive with it sporting a quad core 2.7 GHz SOC with 3GB RAM and a 3220mAh battery. The rest of it is rather standard fare, including the standard array of sensors that everyone has come to expect, a decent camera (that can do usable 4K video) and a choice between 32GB and 64GB of storage. If you were upgrading every 2 years or so the Nexus 6 would be an impressive step up, however compared to what’s been available in the market for a while now it’s not much more than a giant screen.

You can’t help but compare this phone to the recently released iPhone 6+ which also sports a giant screen and similar specifications. In terms of who comes out ahead it’s not exactly clear as they both win out in various categories (the Nexus 6 has the better screen, the iPhone 6+ is lighter), but the choice between them will be driven more by which ecosystem you’ve already bought into. I’d be interested to see how these devices compare side by side however as there’s only so much you can tell by looking at spec sheets.

As someone who’s grown accustomed to his 5″ screen I was hoping there’d be a diminutive sister to the Nexus 6, much like the iPhone 6. You can still get the Nexus 5, which now sports Android L, however the specs are the same as they ever were which means there’s far less incentive for people like me to upgrade. Talking to friends who’ve made the switch to giant phones like this (and seeing my wife, with her tiny hands, deftly use her Galaxy Note) it seems like the upgrade wouldn’t be too much of a stretch. Had there been a smaller screen I would probably be a little more excited about acquiring one as I don’t really have a use case for a much bigger screen than what I have now. That could change once I get some time with the device, though.

So whilst I might not be frothing at the mouth to get Google’s latest handset they might just end up getting my money anyway as there are just enough new features for me to justify upgrading my nearly 2 year old handset. There’s no mistaking that the Nexus 6 is the iPhone 6+ for those on the Android ecosystem and I’m sure there will be many a water cooler conversation over which of them is the better overall device. For me though the main draw is the stock Android interface with updates that are unimpeded by manufacturers and carriers, something which has been the bane of my Android existence for far too long. Indeed that’s probably the only compelling reason I can see to upgrade to the Nexus 6 at the moment, which is likely enough for some.

tesla-motors-p85d

Tesla Gives us the D: Dual Motors and Autopilot.

The Tesla Model S as we know it today is quite an impressive car. Whilst it’s not exactly within the everyman’s price range yet (getting one landed in Australia likely won’t see much change from $100K) it’s gone a long way to making a high performing electric vehicle available to the masses, especially considering Tesla’s stance on their patents. Before that electric cars were more of a niche product for the ultra environmentally conscious, combining tiny engines with small frames that had just enough power to get you to work and back. Now they’re far more easily compared to high end luxury cars and with the new things that Elon announced last week electric cars are heading into a class all of their own.


Elon teased last week that he was going to unveil the D soon (seemingly forgetting how much of a dirty mind the entire Internet has) along with “something else”. The D turned out to be their new drive train that incorporates 2 motors, making the Tesla Model S one of the few fully electric all wheel drive cars. The something else was the debut of their autopilot system, a sort of cut down version of the Google self-driving car. Whilst the D version of the Model S won’t be available for another couple of months (although you can order one today) all Model S cars built within the last couple of weeks have shipped with the autopilot hardware. Suffice to say both these announcements are pretty exciting, although the latter probably more so.

The dual motor drive train is an interesting upgrade for the Model S as all wheel drive is a pretty common feature among higher end luxury cars, something which it has been lacking. Of particular note is how the dual motor upgrade affects the various aspects of the car, like slashing 0.8 seconds off the 0-100 time (down to 3.2 seconds) and increasing range by about 3.5%, all whilst granting the benefits that all wheel drive provides. Typically you’d be taking a decent hit to range and efficiency due to the increased weight and power requirements but the Model S has managed to come out on top in all respects. Should those figures hold up in real world testing then it’ll speak volumes to the engineering team that Tesla has managed to cultivate.
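
Those figures are easy to sanity check with some back-of-envelope arithmetic. Note the single motor baselines below are implied or assumed values (the range figure is my assumption based on the 85 kWh car's roughly 265 mile EPA rating), not official Tesla numbers:

```python
# Back-of-envelope check of the quoted dual motor improvements
dual_0_100 = 3.2                         # quoted dual motor 0-100 time (s)
single_0_100 = dual_0_100 + 0.8          # implied single motor baseline: 4.0 s
improvement = 0.8 / single_0_100 * 100

single_range_km = 426                    # assumed single motor range (~265 mi EPA)
dual_range_km = single_range_km * 1.035  # the quoted ~3.5% gain

print(f"0-100 time improved by {improvement:.0f}%")      # 20%
print(f"range rises to roughly {dual_range_km:.0f} km")  # ~441 km
```

A 20% cut in the 0-100 time alongside a range gain, rather than the usual trade-off between the two, is what makes the upgrade notable.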

However the most interesting part for me was the debut of Tesla’s autopilot system. Elon Musk has always been of the mind that a self driving car didn’t need to be an all encompassing thing; instead they should aim to automate the majority of tasks first before taking the next leap into full automation. Tesla’s autopilot system is the embodiment of that philosophy, taking some of the technology that’s currently available (emergency braking, lane keeping, collision avoidance) and combining it into one seamless package. It won’t get you from point A to point B without human intervention but it’ll happily take over on the highway, park itself in the garage and even meet you at a certain location. It might not be as comprehensive as what Google is looking to create but it’s available today and does almost everything you’d need it to.

I really shouldn’t be surprised that a Musk created company is managing to innovate so quickly in an industry that has long been one of the slowest movers but honestly these two announcements blew me away. The dual motors might not exactly be a revolutionary piece of technology but the way Tesla has done it speaks volumes to the calibre of people they have working there. The introduction of autopilot just over a year since they first talked about it really is quite amazing and whilst it might not be the all encompassing system that Google is seeking it will likely be the standard for many years to come. I can’t wait to see what Tesla has in store for us next as they don’t seem to have any intention of slowing their brisk pace of innovation any time soon.

HP_Halves

HP Splits in Two: Hewlett-Packard Enterprise and HP Inc.

The time has long since passed when a computer manufacturer could get by on shipping tin. The margins on computer equipment are so low that, most of the time, the equipment they sell is just a loss leader for another part of the business. Nowadays the vast majority of most large computer companies’ revenue comes from their services divisions, usually under the guise of providing the customer a holistic solution rather than just another piece of tin. Thus for many companies the past couple of decades have seen them transform from pure hardware businesses into more services focused companies, with several attempting more radical transformations in order to stay relevant. HP has become the most recent company to do this, announcing that it will be splitting in half.


HP will now divest itself into 2 different companies. The first will be Hewlett Packard Enterprise, comprising their server market, services branch and software group. The second will be purely consumer focused, comprising their personal computer business and their printing branch. If you were going to split a PC business this is pretty much how you’d do it as whilst these functions are somewhat complementary (especially if you want to be the “end to end” supplier for all things computing) there are just as many times when they’re at odds. HP’s overarching strategy with this split is to have two companies that can be more agile and innovative in their respective markets and, hopefully, see better margins because of it.

When I first heard the rumours swirling about this potential split the first question that popped into my head was “Where is the services business going?”. As I alluded to before the services business is the money maker for pretty much every large PC manufacturer these days and in this case the enterprise part of HP has come away with it. The numbers only give a slight lead to the new enterprise business in terms of revenue and profit, however the hardware business has been on a slow decline for the past few years which, if I’m honest, paints a bleak picture for HP Inc. going forward. There’s nothing to stop them from developing a services capability (indeed parts of the consumer business already have that) however in its current form I’d put my money on HP Inc. being the one who’s worse off out of this deal.

That could change however if HP’s rhetoric has some merit to it. HP, as it stands today, is an amalgamation of dozens of large companies that it acquired over the years and whilst they all had a similar core theme of being in the IT business there really wasn’t an overarching goal for them to adhere to. The split gives each new company the opportunity to define that goal more clearly, sharpening their mission within their designated market segments. Whether that will translate into the innovation and agility they’re seeking is something we’ll have to wait and see as this is yet another unprecedented change from a large IT conglomerate.

As someone who’s been involved in the IT industry for the better part of 2 decades now the amount of change that’s happened in the last couple years has been, honestly, staggering. We’ve seen IBM sell off some of its core manufacturing capability (the one no one got fired for buying), Dell buy back all its stock to become a private company again and now HP, the last of the 3 PC giants, divest itself into 2 companies. It will likely take years before all the effects of these changes are really felt but suffice to say that the PC industry of the future will look radically different to that of the past.

FULL DISCLOSURE: The writer is a current employee of Dell. All opinions expressed in this article are of the writer’s own and are not representative of Dell.

Windows 10 Start Menu

Windows 10: The Windows 8 For Those Who Can’t Get Over 7.

Microsoft really can’t seem to win sometimes. If they stop making noticeable changes to their products everyone starts whining about how they’re no longer innovating and that people will start to look for alternatives. However should they really try something innovative everyone rebels, pushing Microsoft to go back to the way things ought to be done. It happened with Vista, the Ribbon interface and most recently with Windows 8. Usually what happens though is that the essence of the update makes it into the new version with compromises made to appease those who simply can’t handle change.

And with that, ladies and gentlemen, Microsoft has announced Windows 10.


Everyone seems to be collectively shitting their pants over the fact that Microsoft skipped a version number, somehow forgetting that most of the recent versions of Windows have come sans any number at all. If you want to get pedantic about it (and really, I do) the last 10 versions of Windows have been: Windows 3.1, Windows 95, Windows 98, Windows NT 4.0, Windows 2000, Windows ME (gag), Windows XP, Windows Vista, Windows 7 and Windows 8. If you were expecting them to release Windows 9 because the last 2 versions of Windows just happened to be in numerical order I’m going to hazard a guess you ate a lot of paint as a child.

On a more serious note the changes that many people were expecting to make up the 8.2 release appear to have been bundled into Windows 10. The start menu makes its triumphant return after 2 years on the sidelines, although those modern/metro apps that everyone loved to hate will now make an appearance on there. For someone like me who hasn’t really relied on the start menu since even before Windows 8 arrived (pressing the Windows key and then typing in what I want is much faster than clicking my way through the menu) I’m none too bothered by its return. It will probably make Windows 10 more attractive to the enterprise though as many of them are still in the midst of upgrading from XP (or purposefully delaying upgrading to 8).

The return of the start menu goes hand in hand with the removal of the metro UI that hosted those kinds of apps, which have now been given the ability to run in a window on the desktop. This is probably one of the better improvements as it no longer means you get a full screen app taking over your desktop if you accidentally click on something that somehow associated itself with a metro app. For me this most often seems to happen with mail as even though I’ve got Outlook installed the Mail app still seems to want to launch itself every so often. Whether or not this will make that style of app more palatable to the wider world remains to be seen, however.

There have also been a few other minor updates announced, like the inclusion of multiple desktops and improved Aero Snap. The command line has also received a usability update, now allowing you to use CTRL + C and CTRL + V to copy and paste respectively. In all honesty if you’re still doing your work in the command line on any version of Windows above Vista you’re doing it wrong as PowerShell has been the shell of choice for the better part of 7 years. I’m sure some users will be in love with that change but the vast majority of us moved on long ago.

The release date is scheduled for late next year with a technical preview available right now for enterprising enthusiasts. It will be interesting to see what the take up rate is as that date might be a little too late for enterprises still running XP, who will most likely favour 7 instead. That being said the upgrade path from 7 to 10 is far easier so there is the possibility of Windows 10 seeing a surge in uptake a couple of years down the road. For those early adopters of Windows 7 this next release might just be hitting the sweet spot for them to upgrade so there’s every chance that 10 will be as successful as 7.

I’ll reserve my judgement on the new OS until I’ve had a good chance to sit down and use it for an extended period of time. Microsoft rarely makes an OS that’s beyond saving (I’d really only count ME in there) and whilst I might disagree with the masses on 8’s usability I can’t fault Microsoft for capitulating to them. Hopefully the changes aren’t just skin deep as this is shaping up to be the last major revision of Windows we’ll ever see and there’d be nothing worse than for Microsoft to build their future empire on sand.

IMG_4732

ASUS Transformer Pad TF103C Review.

I’ve only really owned one tablet, the original Microsoft Surface RT, and try as I might to integrate it into parts of my life I honestly can’t figure out where it fits in. Primarily I think this is a function of apps as whilst the Surface is capable in most respects there’s really no killer feature that makes me want to use it for a specific purpose. Indeed this is probably due to my heavy embedding within the Android ecosystem, with all the characteristics that make my phone mine persisted across Google’s cloud. With that in mind, when ASUS offered me a review unit of their new Transformer Pad TF103C for a couple of weeks I was intrigued to see how the experience would compare.


The TF103C is a 10.1″ tablet sporting a quad core, 64 bit Intel Atom processor that runs at up to 1.86GHz. For a tablet those specs are pretty high end which, considering the included keyboard, signals that the TF103C is aimed more towards productivity than simply being a beefy Android tablet. The screen is an IPS display with a 1280 x 800 resolution which is a little on the low side, especially now that retina level displays are fairly commonplace. You can get it with either 8GB or 16GB of internal storage which you can easily expand by up to 64GB via the embedded SDHC slot. It also includes the usual array of wireless interfaces, connectors and sensors although one feature of note is the full sized USB port on the dock. With an RRP of $429 (and street prices coming in well under that) there’s definitely a lot packed into the TF103C for the price.

As a full unit the TF103C is actually pretty hefty, coming in at a total of 1.1KG, although the tablet itself only makes up about half that. The keyboard dock doesn’t contain an additional battery or anything else that you’d think would make it so heavy, especially considering other chiclet style keyboards come in at about half its weight. Considering my full ultrabook weighs in at about 1.5KG it does take away some of the appeal of having a device like this, at least from my perspective. That being said I’m not exactly the biggest tablet user, so the use of two different form factors is lost on me somewhat.

When used in docked form the TF103C is actually quite capable, especially when you attach a mouse to the dock’s USB port. I had wondered how Android would fare when used in a more traditional desktop way and it actually works quite well, mostly since the web versions of your typical productivity applications have evolved a lot in the past couple of years. The keyboard is probably a little on the small side for people with larger hands but it was definitely usable for quick tasks or replying to email. It falls a little short if you’re going to use it on your lap however, due to the fact that the screen can’t be tilted back past a certain point. It’s still usable but it’s a much better experience when used on a desk.

The quad core Intel Atom powering the TF103C is extremely capable, as evidenced by the fact that everything on it runs without a stutter or hiccup. I threw a few of the more intensive games I could find at it and never noticed any slowdown, commendable for a tablet in this price range. When you’re pushing it that hard however the battery life does take quite a hit, knocking the rated 9.5 hours of run time down to less than 4. That being said it managed to stay charged for about a week when idle, making it quite usable as a casual computing device.

All in all I was impressed with the capabilities the TF103C displayed, even if I couldn’t really see it replacing any one of the devices I have currently. There’s a few missed opportunities, like integrating a battery into the keyboard and allowing the screen to tilt more, however overall it’s a very capable device for the asking price. I could definitely see it having a place on the coffee table as something to be used when needed with the added keyboard dock capability coming in handy for more grunty work. It might not end up replacing the device you have now but if you’re looking for a decent tablet that can also be productive then you wouldn’t go wrong with the TF103C.

A review unit was provided to The Refined Geek for 2 weeks for reviewing purposes.

Medieval vs Modern: The Making of a Gargoyle.

One thing that always fascinates me is how much (or indeed how little) technology can change some processes. Technology almost always makes things better, faster and cheaper but you’d think there are a few areas where technology simply couldn’t put a dent in good old fashioned human processes. I don’t know why but when I saw the following video I thought there would be no way that modern processes could be better suited to the task than simply giving it over to a stone mason. By the end of the video however I was stunned at just how fast, and accurately, we could mill out a giant block of sandstone.

Honestly I probably should have expected it as I’ve seen numerous demonstrations of similar technology producing wildly intricate show pieces out of all sorts of materials. However I figured something like this, a craft that many would have thought was now in the domain of only a handful of dedicated practitioners, would be better suited to human hands. I have to say though that I doubt anyone today could carve out something like that in the space of 10 hours, even if you counted all the preparation time they did beforehand. It’s surprisingly hard to find out just how long it took to carve your average stone gargoyle unfortunately, so I’m not sure how this compares to the times when stone carving as a profession was more common.

Realistically though that’s all a flimsy premise for me to post yet another large engineering demonstration video. I can’t help it though, they tickle me in all the right ways :)

IBM_Watson

IBM’s Watson has an API, and It’s Answering Questions.

In a world where Siri can book you a restaurant and Google Now can tell you when you should head for the gate at the airport it can feel like the AI future that many sci-fi fantasies envisioned is already here. Indeed to some extent it is, many aspects of our lives are now farmed out to clouds of servers that make decisions for us, but those machines still lack a fundamental understanding of, well, anything. They’re what are called expert systems: algorithms trained on data to make decisions in a narrow problem space. The AI future that we’re heading towards is going to be far more than that, one where those systems actually understand data and can make far better decisions because of it. One of the first steps towards this is IBM’s Watson and its creators have done something amazing with it.


Whilst currently only open to partner developers IBM has created an API for Watson, allowing you to pose it a question and receive an answer. There’s not a lot of information around what data sets it currently understands (the example is in the form of a Jeopardy! question) but their solution documents reference a Watson Content Store which, presumably, has several pre-canned training sets to get companies started with developing solutions. Indeed some of the applications that IBM’s partner agencies have already developed suggest that Watson is quite capable of digesting large swaths of information and providing valuable insights in a relatively short timeframe.
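
Since the partner API isn't publicly documented, the sketch below is purely illustrative: the endpoint URL, field names and payload shape are all my assumptions about what a question-in, ranked-answers-with-evidence-out interface might look like, not IBM's actual interface.

```python
import json
import urllib.request

# Placeholder endpoint -- NOT IBM's real URL; illustrative only.
API_URL = "https://example.com/watson/v1/question"

def build_question(text: str, answers: int = 3) -> dict:
    """Build a hypothetical ask-Watson payload: a natural language
    question plus a request for supporting evidence with each answer."""
    return {
        "question": {
            "questionText": text,
            "items": answers,                # how many ranked answers to return
            "evidenceRequest": {"items": 1}  # attach a supporting passage each
        }
    }

def ask(text: str) -> dict:
    """POST the question and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_question(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # would need real credentials
        return json.loads(resp.read())
```

The interesting part isn't the plumbing, it's the shape of the exchange: plain English in, a ranked list of answers out, each carrying the evidence that justifies it.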

I’m sure many of my IT savvy readers are seeing the parallels between Watson and a lot of the marketing material that surrounds anything with the buzzword “Big Data”. Indeed many of the concepts of operation are similar: take big chunks of data, throw them into a system and then hope that something comes out the other end. However Watson’s API suggests something that’s far more accessible, dealing in natural human language and providing evidence to back up the answers it gives you. Compare this to Big Data tools, which often require you to either learn a certain query language or create convoluted reports, and I think Watson has the ability to find widespread use whilst Big Data keeps its buzzword status.

For me the big applications for something like this come for places where curating domain specific knowledge is a long, time consuming task. Medicine and law both spring to mind as there’s reams of information available to power a Watson based system and those fields could most certainly benefit from having easier access to those vast treasure troves. It’s pretty easy to imagine a lawyer looking for all precedents set against a certain law or a doctor asking for all diseases with a list of symptoms, both queries answered with all the evidence to boot.

Of course it remains to be seen if Watson is up to the task as whilst its prowess on Jeopardy! was nothing short of amazing I’ve still yet to see any of its other applications in use. The partner applications do look very interesting, and should hopefully be the proving grounds that Watson needs, but until it starts seeing widespread use all we really have to go on is the result of a single API call. Still I think it has great potential and hopefully it won’t be too long before the wider public can get access to some of Watson’s computing genius.