Windows 10 is fast shaping up to be one of the greatest Windows releases, with numerous consumer-facing changes and behind-the-scenes improvements. Whilst Microsoft has been struggling somewhat to deliver on the rapid pace they promised with the Windows Insider program, there has been some progress of late and a couple of new features have made their way into a leaked build. Technology-wise they might not be revolutionary ideas, indeed a couple of them are simply reapplications of tech Microsoft has had for years now, but the improvements they bring speak to Microsoft's larger strategy of trying to reinvent itself. That might sound awfully familiar to those with intimate knowledge of Windows 8 (Windows Blue, anyone?) so it's interesting to see how this will play out.
First cab off the rank in Windows 10's new feature set is a greatly reduced footprint, something Windows has copped a lot of flak for in the past. This might not sound like a big deal on the surface, drives are always getting bigger these days, however the explosion of tablets and portable devices has brought renewed focus on Windows' rather large install size on these space-constrained devices. A typical Windows 8.1 install can easily consume 20GB which, on devices that have only 64GB of space, doesn't leave a lot for a user's files. Windows 10 brings a couple of improvements that free up a good chunk of that space and bring with them a couple of cool features.
Windows 10 can now compress system files, saving approximately 2GB on a typical install. The feature isn't on by default; instead, during the Windows install the system will be assessed to make sure that compression can happen without impacting the user experience. I'm a little skeptical about whether current generation tablet devices will meet the minimum requirements for this, so it will be interesting to see how often the feature actually ends up being enabled.
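That install-time assessment presumably boils down to weighing the space saved against the CPU cost of decompressing system files on every read. Here's a toy sketch of that trade-off in Python; the thresholds and sample data are my own invention, not Microsoft's actual heuristics:

```python
import time
import zlib

def worth_compressing(data: bytes, min_ratio: float = 0.75,
                      max_decompress_s: float = 0.1) -> bool:
    """Decide whether compressing `data` is worthwhile: it must shrink
    the payload enough AND decompress quickly enough not to hurt the
    user experience. Both thresholds are illustrative only."""
    compressed = zlib.compress(data, 6)
    ratio = len(compressed) / len(data)          # smaller is better
    start = time.perf_counter()
    zlib.decompress(compressed)                  # simulate a read-time cost
    elapsed = time.perf_counter() - start
    return ratio <= min_ratio and elapsed <= max_decompress_s

# Highly repetitive data (like much of a system directory) compresses
# well and passes the check; random-looking data would not.
sample = b"A typical system binary has lots of redundancy. " * 1000
print(worth_compressing(sample))  # → True
```

A slow tablet CPU effectively shrinks the time budget, which is presumably why the installer might decline to compress on weaker hardware.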
Additionally Windows 10 does away with the recovery partition on the system drive, which is where most of the space savings come from. Instead of reserving part of the disk to hold a full copy of the Windows 10 install image, which was used for the refresh and repair features, Windows 10 can rebuild itself in place. This comes with the added advantage of keeping all your installed updates, so refreshed PCs don't need to go through the hassle of downloading them all again. However in the event that you do have to do that they've included another great piece of technology that should make updating a new PC in your home a little easier.
Windows 10 will include the option of downloading PC updates via a P2P system, which you can configure to download updates only from your local network or also from PCs on the Internet. It's essentially an extension of the BranchCache technology that's been a part of Windows for a while now, but it makes the capability far more accessible, allowing home users to take advantage of it. If you're running an all-Windows home (like I am) this will make downloading updates far less painful and, for those of us who format regularly, help greatly when we need to grab a bunch of Windows updates again. The Internet-enabled option is mostly for Microsoft's benefit, as it'll take some load off their servers, but it should also help users in regions that don't have great backhaul to the Windows Update servers.
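The idea behind peer-sourced updates is straightforward: before hitting the central update server, ask machines on the local network whether they already have the package, and verify anything they hand back against a known hash so a corrupt or malicious copy can't sneak in. A minimal sketch of that flow in Python; the peer and server callables here are stand-ins, not Microsoft's actual protocol:

```python
import hashlib

def fetch_update(package_hash: str, peers: dict, fetch_from_server) -> bytes:
    """Try local peers first; fall back to the central server.

    `peers` maps a peer name to a callable returning its cached copy of
    the package (or None if it has nothing). Illustrative only.
    """
    for name, get_cached in peers.items():
        data = get_cached()
        if data is not None and hashlib.sha256(data).hexdigest() == package_hash:
            return data  # verified LAN copy; the server is never contacted
    return fetch_from_server()  # nobody local had a valid copy

# Example: one peer holds a corrupt copy, another holds the real thing.
update = b"KB1234567 payload"
good_hash = hashlib.sha256(update).hexdigest()
peers = {
    "desktop": lambda: b"corrupted bits",  # fails the hash check, skipped
    "laptop": lambda: update,              # verified, used
}
result = fetch_update(good_hash, peers, lambda: update)
print(result == update)  # → True
```

The hash check is what makes the scheme safe to open up to strangers on the Internet: it doesn't matter who supplied the bytes so long as they match what the update server says they should be.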
If Microsoft continues to release features like this for Windows 10 then it definitely has a bright future ahead of it. Things like this might not be the sexiest to talk about but they address real concerns that have plagued Windows for years. In the end they all amount to one thing: a better experience for the consumer, something Microsoft has fervently increased its focus on as of late. Whether they'll amount to a panacea for the ills of Windows 8 remains to be seen but suffice it to say I'm confident Windows 10 will line up well.
Microsoft isn’t a company you’d associate with open source. Indeed if you wound back the clock 10 years or so you’d find a company that was outright hostile to the idea, often going to great lengths to ensure open source projects that competed with their offerings would never see the light of day. The Microsoft of today is vastly different, contributing to dozens of open sourced projects and working hard with partner organisations to develop their presence in the ecosystem. For the most part however this has usually been done with an integration view towards their proprietary products which isn’t exactly in-line with the open source ethos. That may be set to change however as Microsoft will be fully open sourcing its .NET framework, the building blocks of all Microsoft applications.
For the uninitiated, Microsoft .NET is a development framework, around since the Windows XP days, that exposes a consistent set of capabilities which applications can make use of. Essentially this meant that if you developed a .NET application you could guarantee it would work on any computer running that framework, something which wasn't entirely a given before its inception. It has since grown substantially in capability, allowing developers to create some very capable programs using nothing more than the functionality built directly into Windows. Indeed it was so successful in accomplishing its aims that there was already a project, dubbed Mono, to make it work on non-Windows platforms, and it is with them that Microsoft is seeking to release a full open source implementation of the .NET framework.
Whilst this still falls in line with Microsoft’s open source strategy of “things to get people onto the Microsoft platform” it does open up a lot of opportunities for software to be freed from the Microsoft platform. The .NET framework underpins a lot of applications that run on Windows, some that only run on Windows, and an implementation of that framework on another platform could quickly elevate them to cross platform status. Sure, the work to translate them would still likely be non-trivial, however it’ll be a damn sight easier with a full implementation available, possibly enough to tempt some companies to make the investment.
One particularly exciting application of an open sourced .NET framework is games, which traditionally have an extremely high opportunity cost when porting between platforms. Whilst not everything about games development on Windows is strictly .NET, there are a lot of .NET based frameworks out there that will be readily portable to new platforms once the open sourcing is complete. I'm not expecting miracles, of course, but it does mean the future of cross-platform releases is looking a whole bunch brighter than it was just a week ago.
This is probably one of Microsoft's longest bets in a while, as it's going to be years before the .NET framework sees any kind of solid adoption among the non-Windows crowd. However it does drastically increase the potential for C# and .NET to become the cross platform framework of choice for developers, especially considering the large .NET developer community that already exists today. It's going to be an area many of us will be watching with keen interest, as it's yet another signal that Microsoft isn't the company it used to be, and likely never will be again.
I honestly couldn’t tell you how long I’ve been hearing people talk about Apple getting into the smartwatch business. It seemed every time that WWDC or any other Apple event rolled around there’d be another flurry of speculation as to what their wearable would be. Like most rumours details on it were scant and so the Internet, as always, circlejerked itself into a frenzy about a product that might not have even been in development. In the absence of a real product competitors stepped up to the plate and, to their credit, the devices have started to look more compelling. Well today Apple finally announced their Watch and it’s decidedly mediocre.
For starters it makes the same mistake many smartwatches make: it follows the prevailing design trend of nearly all the others. Partly this is due to LCD screens being rectangular, limiting what you can do with them, however from a company like Apple you'd expect them to buck the trend a bit. Instead you've got what looks like an Apple-ized version of the Pebble Steel, not entirely unpleasing but at the same time incredibly bland. I guess if you're a fan of having a shrunken iPhone on your wrist then the style will appeal to you, but honestly smartwatches that look like smartwatches are a definite turn off for me and I know I'm not alone in thinking this.
Details as to what’s actually under the hood of this thing are scarce, probably because unlike most devices Apple announces you won’t be able to get your hands on this one right away. Instead you’ll be waiting until after March next year to get your hands on one and the starting price is somewhere on the order of $350. That’s towards the premium end of the smartwatch spectrum, something which shouldn’t be entirely unexpected, and could be indicative of the overall quality of the device. Indeed what little details they’ve let slip do seem to indicate it’s got some decent materials science behind it (both in the sapphire screen and the case metals) which should hopefully make it a more durable device.
Feature wise it’s pretty much as you’d expect, sporting the usual array of notifications pushed from your phone alongside a typical array of sensors. Apple did finally make its way into the world of NFC today, both with the Apple Watch and the new iPhone, so you’ll be able to load up your credit card details into it and use the watch to make payments. Honestly that’s pretty cool, and definitely something I’d like to see other smartwatch manufacturers emulate, although I’m not entirely hopeful that it’ll work anywhere bar the USA. Apple also toutes an interface that’s been designed around the smaller screen but without an actual sample to look over I really couldn’t tell you how good or bad it would be.
So all the blather and bluster that preceded this announcement was, surprise, completely overblown, and the resulting product does nothing to stand out in the sea of computerized wrist adornments. I'm sure there's going to be a built-in market of current Apple fans but outside that I really can't see the appeal of the Apple Watch over the numerous other devices. Apple does have a good 6 months or so to tweak the product before release, so there's potential for it to become something more before they drop it on the public.
I haven’t been an iPhone user for many years now, my iPhone 3GS sitting disused in the drawer beside me ever since it was replaced, mostly because the alternatives presented by other companies have, in my opinion, outclassed them for a long time. This is not to say that I think everything else should replace their phone with a Xperia Z, that particular phone is definitely not for everyone, as I realise that the iPhone fills a need for many people. Indeed it’s the phone I usually recommend to my less technically inclined friends and family members because I know that they have a support system tailored towards them (meaning they’ll bug me less). So whilst today’s announcement of the new models won’t have me opening up my wallet anytime soon it is something I feel I need to be aware of, if only for the small thrill I get for being critical of an Apple product.
So, as many had speculated, Apple announced 2 new iPhones today: the iPhone 5C, which is essentially the entry level model, and the iPhone 5S, the top of the line one with all the latest and greatest features. The most interesting difference between the two is the radical difference in design, with the 5C looking more like a kid's toy with its pastel style colours and the 5S looking distinctly more adult with its muted tones of silver, grey and gold. As expected the 5C is the cheaper of the two, with the base model starting from AUD$739 and the 5S from AUD$869, with prices ramping up steadily depending on how much storage you want.
The 5C is interesting because everyone was expecting a budget iPhone and Apple's response is clearly not what most people had in mind. Sure it's the cheapest model of the lot (bar the iPhone 4S) but should you want to upgrade the storage you're already paying the same amount as the entry level 5S. The differences in features are also pretty minimal, the exceptions being an A6 vs A7 processor, slightly bulkier dimensions, the newfangled fingerprint home button and a slightly better camera. Of course those slight differences are usually enough to push any potential iPhone buyer to the higher end model, so the question then becomes: who is the 5C marketed towards?
It’s certainly not at the low end of the market, as most people were expecting, even though it looks the part with its all plastic finish (which we haven’t seen since I last used an iPhone). It might appeal to those who like those particular colours although realistically I can’t see that being much of a draw card considering you can buy any colour case for $10 these days. Indeed even when you factor in the typical on contract price for a new iPhone (~$200) the difference between an entry level 5C and 5S is so small that most would likely dole out the extra cash just to have the better version, especially considering how visually different they are.
Another thing running against the 5C is that the 5S shares the same dimensions as the original iPhone 5, allowing you to use all your old cases and accessories with it. I know this won't be a dealbreaker for many but it seems obvious that the 5S is aimed at people coming from the iPhone 5, whereas the 5C doesn't appear to have any particular market in mind that necessitates its differences. If this was Apple's attempt to claw back some of the market that Android has been happily dominating then I can't help but feel it's completely misguided. Then again I lost my desire for Apple products years ago so I might be missing out on what the appeal of a gimped, not-really-budget Apple handset might be.
The iPhone 5S does look like a decent phone, sporting most of the features you'd expect from a current generation smartphone. NFC is still missing which, if I'm honest, isn't as big of a deal as I used to make it out to be; I've now got an NFC phone and I can't use it for jack, so I don't count it as a downer anymore. As always though the price compared to an equivalent Android handset is a big sore point, with the top of the line model topping out at an incredible AUD$1129. I know Apple is a premium brand but when the price difference between the high and low end is $260 and the only difference is storage you really have to ask if it's worth it, especially when comparable Android phones will have the same level of features and will be cheaper (my 16GB Xperia Z was $768, for reference).
I will be really interested to see how the 5C pans out as many are billing it as the “budget” iPhone that everyone was after when in truth it’s anything but that. The 5S is your typical product refresh cycle from Apple, bringing in a few new cool things but nothing particularly revolutionary. Of course you should consider everything I’ve said through the eyes of a long time Android user and lover as whilst I’ve owned an iPhone before it’s been so long between drinks that I can barely remember the experience anymore. Still I’m sure at least the 5S will do well in the marketplace as all the flagship Apple phones do.
You know who gets a ton of my money these days? Game publishers. Whilst they might not get the same amount per sale that they used to the amount I pump into the industry per year has rocketed up in direct correlation with my ability to pay. Nearly every game you see reviewed on here is purchased gladly with my own money and I would happily do the same with all forms of entertainment if they provided the same level of service that the games industry does. However my fellow Australian citizens will know the pain that we routinely endure here with delayed releases and high prices, so much so that our Parliament subpoenaed several major tech companies to have them explain themselves.
If I’m honest though I had thought the situation was getting a bit better, that was until I caught wind of this:
I saw the trailer for Cloud Atlas sometime last year and the concept instantly intrigued me. As someone whose formative years were spent idolizing The Matrix I've always been a fan of the Wachowskis' work and so of course their latest movie was of particular interest. Since I'm on the mailing list for my preferred local cinema (Dendy, in case you're wondering) I simply waited for the email announcing it. For months and months I waited for something to come out, until I started hearing friends talking about how they had already seen it. Curious, I checked my favourite Usenet site and lo and behold it was available, which meant only one thing.
It was available on DVD elsewhere.
That email I was waiting for arrived a couple of days ago, 4 months after the original theatrical release in overseas markets. Now I know it's not that hard to get a film approved in Australia, nor is it that difficult to get it shipped over here (even if it was shot on film), so what could be the reason for such a long delay? As far as I can tell it's the distributors holding onto their outdated business models in a digital era, creating artificial scarcity in order to try and bilk more money out of end consumers. I've deliberately not seen movies in cinemas in the past due to shenanigans like this and Cloud Atlas is likely going to be the latest entry on my civil disobedience list.
I seriously can’t understand why movie studios continue with behaviour like this which is what drives customers to seek out other, illegitimate means of getting at their content. I am more than happy to pay (and, in the case of things like Cloud Atlas, at a premium) for content like this but I do not want my money going to businesses that fail to adapt their practices to the modern world. Artificial scarcity is right up there with restrictive DRM schemes in my book as they provide absolutely no benefit for the end user and only serve to make the illegitimate product better. Really when we’re hit from all sides with crap like this is it any surprise that we’re a big ole nation o pirates?
A decade ago many of my generation simply lacked the required disposable income in order to support their habits and piracy was the norm. We’ve all grown up now though with many of us having incomes that we could only dream of back then, enough for us to begin paying for the things we want. Indeed many of us are doing that where we’re able to but far too many industries are simply ignoring our spending habits in favour of sticking to their traditional business models. This isn’t sustainable for them and it frustrates me endlessly that we still have to deal with shit like this when it’s been proven that this Internet thing isn’t going away any time soon. So stop this artificial scarcity bullshit, embrace our ideals and I think you’ll find a torrent of new money heading in your direction. Enough so that you’ll wonder why you held such draconian views for so long.
The de facto platform of choice for any gamer used to be the Microsoft Windows based PC, however the last decade has seen that change to some form of console. Today, whilst we're seeing something of a resurgence in the PC market thanks in part to some good releases this year and ageing console hardware, PCs take somewhere on the order of 5% of the video game market. If we then extrapolate from the fact that only about 1~2% of the PC market is Linux (although this number could be higher if restricted to gamers) then you can see why many companies have ignored it for so long; it just doesn't make financial sense to get into it. However there have been a few recent announcements that show there's an increasing amount of attention being paid to this ultra-niche, and that makes for some interesting speculation.
Gaming on Linux has always been an exercise in frustration, usually due to the Windows-centric nature of the gaming industry. Back in the day Linux suffered from a lack of good driver support for modern graphics cards, which made it nearly impossible to get games running at an acceptable level. Once that was sorted out (whether you count binary blobs as “sorted” is up to you) there was still the issue that most games were simply not coded for Linux, leaving users with very few options. Many chose to run their games through WINE or Cedega, which actually works quite well, especially for popular titles, but many were still left wanting for titles that would run natively. The Humble Indie Bundle has gone a long way towards getting developers working on Linux but it's still something of a poor cousin to the Windows platform.
Late last year saw Valve open up beta access to Steam on Linux, bringing with it some 50-odd titles to the platform. It came as little surprise that they did this considering they did the same thing with OSX just over 2 years ago, which was undoubtedly a success for them. I haven't really heard much about it since then, mostly because none of my gamer friends run Linux, but there's evidence to suggest it's going pretty well, as Valve is making further bets on Linux. As it turns out their upcoming Steam Box will be running some form of Linux under the hood:
Valve’s engineer talked about their labs and that they want to change the “frustrating lack of innovation in the area of computer hardware”. He also mentioned a console launch in 2013 and that it will specifically use Linux and not Windows. Furthermore he said that Valve’s labs will reveal yet another new hardware in 2013, most likely rumored controllers and VR equipment but we can expect some new exciting stuff.
I’ll be honest and say that I really didn’t expect this even with all the bellyaching people have been doing about Windows 8. You see whilst being able to brag about 55 titles being on the platform already that’s only 2% of their current catalogue. You could argue that emulation is good enough now that all the titles could be made available through the use of WINE which is a possibility but Valve doesn’t offer that option with OSX currently so it’s unlikely to happen. Realistically unless the current developers have intentions to do a Linux release now the release of the Steam Box/Steam on Linux isn’t going to be enough to tempt them to do it, especially if they’ve already recovered their costs from PC sales.
That being said, all it might take is one industry heavyweight to put their weight behind Linux to start a cascade of others doing the same. As it turns out Blizzard is doing just that, with one of their titles slated for a Linux release some time this year. Blizzard has a long history with cross platform releases, as they were one of the few companies doing releases for Mac OS decades ago, and they've stated many times that they have a Linux World of Warcraft client that they've shied away from releasing due to support concerns. Releasing an official client for one of their games on Linux will be their way of verifying whether it's worth it for them to continue doing so, and should it prove successful it could be the shot in the arm that Linux needs to become a viable platform for games developers to target.
Does this mean that I’ll be switching over? Probably not as I’m a Microsoft guy at heart and I know my current platform too well to just drop it for something else (even though I do have a lot of experience with Linux). I’m very interested to see how the Steam Box is going to be positioned as it being Linux changes the idea I had in my head for it and makes Valve’s previous comments about them all the more intriguing. Whilst 2013 might not be a blockbuster year for Linux gaming it is shaping up to be the turning point where it starts to become viable.
I wasn’t going to write about Apple’s latest release in the iPad Mini and iPad 4 mostly because there wasn’t really anything to write about. The iPad 4 was a bit of a shock considering that the 3 is barely 6 months old and was a pretty significant upgrade over its predecessor so you wouldn’t really think it needed a refresh this early on. The iPad Mini was widely rumoured for a very long time, so much so that blogging about it would feel like I was coming incredibly late to a party that I didn’t really care about in the first place. Thinking about it more though the iPad Mini represents a lot more than just Apple releasing yet another iOS product, it’s a sign of how Apple is no longer in control of the market they created.
Steve Jobs famously said that a tablet smaller than the iPad wouldn't make any sense, as it'd be too small to compete with regular tablets and too big to compete with smartphones. With Apple's relatively long development cycle it's likely he was aware of the iPad Mini's development, but I don't think the idea for its creation came from him. It was easy for him to make judgements from atop the massive tower of iPad sales he was sitting on at the time, however I don't think he expected the smaller competing tablets to be as successful as they were. None of them can match the iPad for total numbers sold yet, but that doesn't mean there isn't a niche that Apple was failing to exploit.
It all started with the Kindle Fire just over a year ago. The tablet was squarely aimed at a particular market, one that didn't want to spend a lot on a tablet device and was happy to accept a lower end device in return. This proved to be wildly popular and as of this month Amazon has shipped over 7 million of the devices, putting it second only to the iPad itself in terms of sales. This in turn drew other companies to the small tablet form factor, with the most notable recent addition being the Google Nexus 7 which, as of writing, has already sold an estimated 3 million units worldwide. Apple can't have been ignorant of this and saw that there was a rather large niche they weren't exploiting, hence the release of the iPad Mini.
For a company that’s been making and dominating markets for a decade now the iPad Mini then represents the first product Apple’s created as a reaction to market forces. Whilst we can always point to technology companies that did what did before they entered the market they’re usually no where near as successful. With the small tablet form factor sector however there are multiple companies who have managed to make quite a killing in this particular space prior to Apple entering. You could argue that Apple still owns the tablet space as a whole (and that’s true, to a point) but when it comes to form factors other than those of the traditional iPad Apple has been absent up until this week, and that’s lost money they’ll never recover.
Comparatively it’s a small slice of the overall tablet pie which Apple is still getting the lion’s share of. Even though they might’ve lost 10 million potential sales to a niche market they weren’t filling they still managed to ship 14 million iPads last quarter. Their figures for this quarter might be down on what people were expecting however with the release of the new iPad and the iPad Mini right before the holiday season it’s very likely that they’ll make up that shortfall without too much trouble. Whether that will translate into dominance of the smaller form factor tablet market is up for debate and realistically we’ll only know once next quarter’s results come in.
Whilst I don’t believe this is the beginning of the end for Apple it is the first product to come from them in a long time that, as far as I can tell, is a reaction to the market rather than them attempting to create one. That’s a very different Apple than the one we’re used to seeing and whilst it isn’t necessarily a bad thing (dominating semi-established markets seems to be their bread and butter) it does make you wonder if their focus has shifted away from market creation. I don’t really know enough to answer that but if you were still wondering what Apple under Tim Cook would look like then you might be seeing the beginnings of an answer here. Whether that’s good or not is an exercise I’ll leave for the reader.
There’s only one thing that I don’t like about my little 60D and that’s the fact that it’s not a full frame camera. For the uninitiated this means that the sensor contained within the camera, the thing that actually records the image, is smaller than the standard 35mm size which was prevalent during the film days. This means that in comparison to its bigger brothers in more serious cameras there are some trade offs made, most done in the name of reducing cost. Indeed for comparison a full framed camera would be over double the price I paid for my 60D and would actually lack some of the features that I considered useful (like the screen that swings out). The rumour mill has been churning for quite a while that Canon would eventually release an affordable full frame DSLR at this year’s Photokina and the prospect really excited me, even if my 60D is still only months old at this point.
News broke late yesterday that yes, the rumours were true and Canon was releasing a new camera called the EOS 6D, in essence a full frame camera for the masses. The nomenclature would have you believe it was a full frame upgrade for the 60D, something widely rumoured to be the case, but diving into the specifications reveals that it shares a lot more with the 5D lineage than it does with its prosumer cousin. This doesn't mean the camera is more focused on the professional field, indeed the inclusion of things like WiFi and GPS is usually considered a consumer feature (I've had them in my Sony pocket cam for years, for example), but if I'm honest the picture I built up of the new camera in my head doesn't exactly align with what Canon has revealed, and that's left me somewhat disappointed.
Before I get into that though, let me list off the things that are really quite awesome about the 6D. A full frame sensor in a camera that will cost $2099 is pretty damn phenomenal, even if that's still well out of reach for most people buying at the 60D level. It's actually the cheapest full frame DSLR available (even the Sony fixed lens full frame is $700 more) and that in itself is an achievement worth celebrating. All the benefits of the bigger sensor are a given (better low light performance, crazy ISOs and better resolution) and the addition of WiFi and GPS means the 6D is definitely one of the most feature packed cameras Canon has ever released. Still, it's the omission of certain features and the reduction in others that's left me wondering if it's worth upgrading to.
For starters there’s the lack of an articulated screen. It sounds like a small thing as there are external monitor solutions that would get me similar functionality but I’ve found that little flip out screen on my 60D so damn useful that it pains me to give it up. The reasons behind its absence are sound though as they want to make the 6D one of their more sturdier cameras (it’s fully weather sealed as well) and an articulated screen is arguably working against them in that regard.
There’s also the auto-focus system which only comes with 11 focus points of which only 1 is cross type. This is a pretty significant step down from the 60D and coming from someone who struggled with their 400D’s lackluster autofocus system I can’t really see myself wanting to go back to that. It could very well be fine but on paper it doesn’t make me want to throw my money recklessly in Canon’s direction like I did with all the rumours leading up to this point.
One thing could sway me and that would be if Magic Lantern made its way onto the 6D platform. The amount of functionality you unlock by running this software is simply incredible and, whilst it won't fix the 2 things that have failed to impress me, it would make the 6D much more palatable. Considering that the team behind it just managed to get their software working on the ever elusive 7D there's a good chance of it happening, and I'll have to see how I feel about the 6D after that happens.
Realistically the disappointment I’m feeling is my fault. I broke my rule about avoiding the hype and built up an image of the product that had no basis in reality. When it didn’t match those expectations exactly I was, of course, let down and there’s really nothing Canon could have done to prevent that. Maybe as time goes on the idea of the 6D will grow on me a bit more and then after another red wine filled night you might see another vague tweet that indicates I’ve changed my mind.
Time to restock the wine rack, methinks.
One of my most hotly anticipated games for this year, and I know I’m not alone in this, is Blizzard’s Diablo III. I can remember the days of the original Diablo, forging my way down into the bowels of the abandoned church and almost leaping out of my chair when the Butcher growled “Aaaahhh, fresh meat!” as I grew close to him. I then went online, firing up my 33K modem (yes, that’s all I had back then) and hitting up the then fledgling Battle.Net, only to be overwhelmed by other players who gifted me with unimaginable loot. I even went as far as to buy the only expansion, Hellfire, and play it to its fullest, revelling in the extended Diablo universe.
Diablo II was a completely different experience, one that was far more social for me than its predecessor. I can remember many LANs dedicated to simply creating new characters and seeing how far we could get with them before we got bored. The captivation was turned up to a whole new level, however, with many of us running dungeons continuously in order to get that last set item or hoping for that extremely rare drop. The expansion pack served to keep us playing for many years after the game’s release and I still have friends telling me how they’ve spun it back up again just for the sheer thrill of it.
Amongst all this is one constant: the torturous strain that we put on our poor computer mice. The Diablo series can be played almost entirely using the mouse thanks to the way the game was designed, although you do still need the keyboard, especially at higher difficulties. In that regard it seemed like the Diablo series was destined for the PC and the PC only forever more. Indeed even though Blizzard had experimented with the wild idea of putting StarCraft on the Nintendo 64 they did not attempt the same thing with the Diablo series. That is, up until now.
Today there are multiple sources reporting that Diablo III will indeed be coming to consoles. As Kotaku points out the writing has been on the wall for quite some time about this but today is the day when everyone has started to pay attention to the idea. Now I don’t think there’s anything about the Diablo gameplay that would prevent it from being good on a console, as opposed to StarCraft (which would be unplayable, as is any RTS on a console). Indeed the simple interface of Diablo’s past would easily lend itself well to the limited input space of the controller with few UI changes needed. What concerns most people though is the possibility that Diablo III could become consolized, ruining the experience for PC gamers.
Considering that we’ve already got a beta version of Diablo III on PC it’s a safe bet that the primary platform will be the PC. Blizzard also has a staunch commitment to not launching games until they’re done and you can bet that if there were any hints of consolization in one of their flagship titles it’d be picked up in beta testing long before it became a retail product. Diablo III coming to consoles is a sign of the times: PC gaming is still somewhat of a minority and even titles that have their roots firmly in the PC platform still need to consider a cross platform release.
Does this mean I’ll play Diablo III on one of my consoles? I must say that I’m definitely curious but I’ve already put in my pre-order for the collector’s edition of Diablo III on the PC. Due to the tie in with Battle.Net it’s entirely possible that buying it on one platform will gain you access to another via a digital download (something Blizzard has embraced wholeheartedly) and I can definitely see myself trying it out just for comparison. For me though the PC will always be the primary platform I game on, and I can’t deny my mouse the torturous joy that comes from a good old fashioned Diablo session.
My post last week about the trials and tribulations of sorting one’s media collection struck a chord with a lot of my friends. Like me they’d been doing this sort of thing for decades and the fact that none of us had any kind of sense to our sorting systems (apart from the common thread of “just leave it where it lies”) came as something of a surprise. Taking the desk I’m sitting at right now as an example: it’s clear of everything bar computer equipment and the stuff I bring in with me every day. The fact that this kind of organization doesn’t extend to our file systems means that we either simply don’t care enough or that it’s just too bothersome to get things sorted. Whilst I can’t change the former I decided I could do something about the latter.
With my quest last week proving fruitless I set about developing a program that could sort media based on a couple of cues derived from the files themselves. Now for the most part media files have a few clues as to what they actually are. For the more organized of us the top level folder will contain the series name, but since mine was all over the place I figured it couldn’t be trusted. Instead I figured that the file name would be semi-reliable based on a cursory glance at my media folder: most of them were single strings delimited with only a few characters. Additionally the identifier for season and episode number is usually pretty standard (S01E01, 2x01, 1008, etc.) so pulling the season and episode out of them would be relatively easy. What I was missing was something to verify that I was looking in the right place, and that’s where TheTVDB comes in.
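Those identifier formats can all be picked up with a handful of regular expressions. Sortilio itself is written in C#, so the following is just an illustrative Python sketch of the idea; the pattern list and function name are my own invention, not the tool’s actual code:

```python
import re

# Common season/episode identifier styles mentioned above (illustrative list):
#   S01E01 -> "SxxEyy" style
#   2x01   -> "season x episode" style
#   1008   -> bare three/four digit style (ambiguous, so it's tried last)
EPISODE_PATTERNS = [
    re.compile(r"[Ss](\d{1,2})[Ee](\d{1,2})"),
    re.compile(r"(\d{1,2})x(\d{2})"),
    re.compile(r"\b(\d{1,2})(\d{2})\b"),
]

def find_episode_id(token):
    """Return (season, episode) if the token looks like an identifier, else None."""
    for pattern in EPISODE_PATTERNS:
        match = pattern.search(token)
        if match:
            return int(match.group(1)), int(match.group(2))
    return None
```

The bare-digit style is the shaky one: a token like “1008” could just as easily be part of a title, which is exactly why a lookup against TheTVDB is needed to verify the guess.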
The TV Database is like IMDB for TV shows except that it’s all community driven. Also unlike IMDB they have a really nice API that someone has wrapped up in a nice C# library that I could just import straight into my project. What I use this for is a kind of fuzzy matching filter for TV show names so that I can generate a folder with the correct name. At this point I could also probably rename the files properly (if I was so inclined) but in the interests of keeping the tool simple I opted not to, at least for now. With that under my belt I started on the really hard stuff: figuring out how to sort the damn files.
Now I could have cracked open the source of some other renaming programs to see how they did it, but after pondering the idea for a short while I came up with a half decent process of my own. It’s a multi-stage process that makes a few assumptions but seems to work well for my test data. First I take the file name and split it up based on the common delimiters used in media files. Then I build up a search string from those broken up names, stopping when I hit a string that matches a season/episode identifier. That search term goes into a list to query for later (checking first to see if it’s already been added) and the file path goes into another list kept against that specific search term, so that I know all files under that search term belong to the same series. Finally I create the new file location string and then present this all to the user, which ends up looking like this:
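The steps above can be sketched roughly as follows. Again, this is a Python illustration of the approach rather than Sortilio’s actual C# code, and the identifier regex is a simplified stand-in:

```python
import re
from collections import defaultdict

# Simplified stand-in for the season/episode identifier check.
EPISODE_ID = re.compile(r"^([Ss]\d{1,2}[Ee]\d{1,2}|\d{1,2}x\d{2}|\d{3,4})$")
# Common delimiters seen in media file names.
DELIMITERS = re.compile(r"[.\-_ ]+")

def build_search_term(file_name):
    """Split a file name on common delimiters and accumulate tokens
    until a season/episode identifier is hit."""
    tokens = []
    for token in DELIMITERS.split(file_name):
        if EPISODE_ID.match(token):
            break
        tokens.append(token)
    return " ".join(tokens)

def group_files(file_paths):
    """Map each search term to the files that share it, so every file
    under one term is treated as belonging to the same series."""
    groups = defaultdict(list)
    for path in file_paths:
        # Strip the folder and extension, leaving just the bare file name.
        name = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        groups[build_search_term(name)].append(path)
    return groups
```

Each resulting search term would then be fired off to TheTVDB once, rather than once per file, which keeps the number of API calls down.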
The view you see here is just a straight up data table of the list of files that Sortilio has found and identified as media (basically anything with the extension .avi or .mkv currently) and the confidence level it has in its ability to sort said media. Green means that the search for the series name found only one match, so it’s a pretty good assumption that it’s got it right. Yellow means that the search for that particular title got multiple responses back from TheTVDB, so the confidence in the result is a little lower. Right now all I do is take the first response and use that for verification, which has served me well with the test data, but I can easily see how that could go wrong. Red means I couldn’t find any match at all (you can see what terms I was searching for in the debug log) and everything marked like that will end up in one giant “Unsorted” folder for manual processing. Once you hit the sort button it performs the move operations and, suffice it to say, it works pretty darn well:
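That green/yellow/red scheme boils down to a simple mapping from the number of TheTVDB matches a search term returned. A minimal Python sketch of that rule (function names are mine, not Sortilio’s):

```python
def confidence(match_count):
    """Colour code from the number of TheTVDB matches for a search term."""
    if match_count == 1:
        return "green"   # single match, safe to assume it's right
    if match_count > 1:
        return "yellow"  # first result taken, lower confidence
    return "red"         # no match at all

def destination(series_name, match_count, sort_root):
    """Files with no match land in one 'Unsorted' folder for manual
    processing; everything else gets a folder named after the matched series."""
    folder = "Unsorted" if match_count == 0 else series_name
    return f"{sort_root}/{folder}"
```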
Of course it’s your standard hacked-together-over-the-weekend type deal, with a lot of not strictly necessary but really nice to have features left out. For starters there’s no way to tell it that a file belongs to a certain series (say, if something is misspelled) or, if it picks the wrong series, to tell it to pick another. Eventually I’m planning to make it so you can click on an item and change the series, along with a nice dialog box to search for new ones should it not get it right. Until then you might want to run it on a small subset of your media at a time (another thing I can code in), as otherwise you might get files ending up in strange folders.
Also lacking is any kind of options page where you can specify things like other extensions, regular expressions for season/episode matching and a whole host of other preferences that are currently hard coded in. These things are nice to have but take forever to get right, so they’ll eventually make their way into another revision, but for now you’re stuck with the way I think things should be done. Granted I believe they’ll work for the majority of people out there, but I won’t blame you if you wait for the next release.
Finally the code will eventually be open sourced once I get it to a point where I’m not so embarrassed by it. If you really want to know what I did in the ~400 lines that constitute this program then shoot me an email or a tweet and I’ll send the source code to you. Realistically any half decent programmer could come up with this in half the time it took me, so I can’t imagine anyone will need it yet, unless you really need to save 3 hours.
So without further ado, Sortilio can be had here. Download it, unleash it on your media files and let me know how it works for you. Comments, questions, bugs and feature requests can be left here as a comment, an @ message on Twitter or you can email me on [email protected].