If you’ll allow me to get a little hipster for a second, you’ll be pleased to find out that I’ve been into the whole Multiplayer Online Battle Arena (MOBA) scene since it first found its roots way back in Warcraft 3. Back then it was just another custom map I played alongside all the other customs I enjoyed, mostly because I suffered from some extreme ladder anxiety. Since then I’ve played my way through all of the DOTA clones that came out (Heroes of Newerth, League of Legends and even that ill-fated experiment from GPG, Demigod) but none of them captured me quite as much as the seemingly official successor, DOTA 2, has.
Defense of the Ancients 2 should be familiar to anyone who played the original DOTA or one of the many games that followed it. In a team of 5 you compete as individual heroes, chosen from a wide selection who all have unique abilities and uses, pushing up one of three lanes with a bunch of NPC creeps at your side. The ultimate goal is the enemy’s Ancient, a very well defended building that will take the concerted effort of all team members to reach and, finally, destroy. There are of course many nuances to what would, on the surface, seem to be a simple game, and it’s these subtleties that make it so engrossing.
When compared to its predecessor, which was limited by the graphics engine of WarCraft 3, DOTA2 stands out as a definite improvement. It’s not a graphical marvel, much like many games in the MOBA genre, instead favouring heavily stylized graphics much as Blizzard does for many of its games. The recent updates to DOTA2 have seen some significant improvements over the initial releases, both in the in-game graphics and the surrounding UI elements. Valve appears to be heavily committed to ensuring DOTA2’s success, and the graphical improvements are just the tip of the iceberg in this regard.
Back in the old days of the original DOTA the worst aspect was finding a game and then hoping that no one would drop out prematurely. There were many 3rd party solutions to this problem, most of which were semi-effective but open to abuse and misuse, and none of them could solve the problem of playing with similarly skilled players. DOTA2, like nearly every other MOBA title, brings in a matchmaking system that pairs you up with other players, along with the ability to rejoin a game should your client crash or your connection drop out.
Unfortunately, since DOTA2 is still in beta the matchmaking system is not yet working entirely as I believe it’s intended to. It does make the process of finding, joining and completing a game much more streamlined, but it is blissfully unaware of how skilled a potential player is. What this means is that games have a tendency to swing wildly in one team’s favour and, unlike other games where this leads to a quick demise (thus freeing you up to play again), DOTA is instead a drawn out process, and should you decide to leave prematurely you’ll be hit with a dreaded “abandoned” mark next to your record. This is not an insurmountable problem though, and I’m sure that future revisions of DOTA2 will address it.
The core gameplay of DOTA2 is for the most part unchanged from back in the days of the original DOTA. You still get your pick from a very wide selection of heroes (I believe most of the AllStars team are in there), the items have the same names and you still go through each of the main game phases (laning, pushing, ganking) as the game progresses. There have been some improvements to take away some of the more esoteric aspects of the original, and for the most part they’re quite welcome.
Gone are the days where crafting items required either in-depth knowledge of what made what or squinting at the recipe text; instead you can click on the ultimate item you want to craft and see what items go into making it. Additionally there’s a list of suggested items for your hero which, whilst not entirely appropriate for every situation, will help ease players into the game as they learn some of the more intricate aspects of itemising a character correctly. It’s still rather easy to draw the ire of players who think they know everything there is to know about certain characters (I’ll touch more on the community later) but at least you won’t be completely useless if you stick to the item choices the game presents to you.
Knowing which hero to pick is just as important as knowing how to item them, and thankfully there are some improvements to the hero picking system that should make doing so a little easier for everyone. Whilst hero picking has always delineated between int/str/agi based heroes, you can now also filter by the kind of role a character fills, like support, ganker or initiator. For public games though it seems everyone wants to play a carry (mostly because they’re the most fun) and little heed is paid to good group composition. This is not a fault of the game per se, but there is potential for sexing up the lesser played roles so that pub compositions don’t end up as carry on carry battles.
It’s probably due to the years of play testing that the original DOTA received, but the heroes of DOTA2 are fairly well balanced, with no outright broken or overpowered heroes dominating the metagame. There are of course heroes that appear broken in certain situations (I had the pleasure of seeing an Outworld Destroyer kill my entire team in the space of 10 seconds) but in reality it’s the player behind the character making them appear that way. This bodes well for the eSports scene that Valve is fostering around DOTA2, and they’re going to need to keep up this level of commitment if they want a chance of dethroning the current king, League of Legends.
The eSports focused improvements in DOTA2 are setting the bar for new game developers who have their eye on developing an eSports scene for their current and future products. The main login screen has a list of the top 3 spectated games, and with a single click you can jump in and watch them with a 2 minute delay. This can be done while you’re waiting to join a game yourself, and once your game is ready you’re just another click away from joining in on the action. It’s a fantastic way for both newcomers and veterans of the genre to get involved in the eSports scene, but that’s just the start of it.
Replays can be accessed directly from a player’s profile or downloaded from the Internet. Game casters can embed audio directly into the replay, allowing users to watch it in game with the caster’s commentary. They can also watch the caster’s view of the game, use a free camera or use the built-in smart camera that automatically focuses on wherever the most action is happening. It’s a vast improvement over how nearly all other games do their replays, and Valve really has to be commended for the work they’ve done here.
For all the improvements, however, there’s one thing DOTA2 can’t seem to get away from and that’s its elitist, almost poisonous community that is very hostile to new players. Whilst the screenshot above is a somewhat tongue-in-cheek example of the behaviour that besets the DOTA2 community, it still holds true that, despite the many concessions made to make the game more palatable for newcomers, the community still struggles with bringing new players into the fold. League of Legends, on the other hand, cracked this code very early on and its subsequent success is a testament to how making a game more inviting for new users is the ultimate way to drive it forward. I don’t have an answer as to how to fix this (and whilst I say LoL cracked the code I’m not 100% sure their solution is portable to DOTA2) and it will be very interesting to see how DOTA2 develops in the shadow of the current MOBA king.
DOTA2 managed to engage me in a way that only one other game has managed recently, and I believe there’s something to that. Maybe it’s a bit of nostalgia, or possibly my inner eSports fan wanting to dive deep into another competitive scene, but DOTA2 has really upped the MOBA experience that I first got hooked on all those years ago and failed to rekindle with all the other titles in this genre. I’d tell you to go out and buy it now, but it’s still in beta, so if you can get your hands on a key I’d definitely recommend doing so, and if you’re new to this kind of game just ignore the haters; you won’t have to deal with them for long.
Defense of the Ancients 2 is currently in beta on PC. Approximately 60 hours of total gameplay were undertaken prior to this review, with a record of 32 wins to 36 losses.
Voice controlled computers and electronics have always been a staple of science fiction, flirting with the idea that we could simply issue commands to our silicon-based underlings and have them do our bidding. Even though technology has come an incredibly long way in the past couple of decades, understanding natural language is still a challenge that remains unconquered. Modern day speech recognition systems often rely on key words in order to perform the required commands, usually forcing the user to use unnatural language to get what they want. Apple’s latest innovation, Siri, seems to be a step forward in this regard and could potentially signal a shift in the way people use their smartphones and other devices.
On the surface Siri appears to understand quite a bit of natural language, recognising that a single task can be phrased in several different ways. Siri also appears to have a basic conversational engine so that it can interpret commands in the context of what you’ve said to it before. The scope of what Siri can do, however, is quite limited, but that’s not necessarily a bad thing: being able to nail a handful of actions from natural language is still leaps and bounds above what other voice recognition systems are currently capable of.
Siri also has a sense of humour, often replying to out of left field questions with little quips or amusing shut downs. I was however disappointed with the response to a classic nerd line of “Tea. Earl Grey. Hot”, which received the following response:
This screenshot also shows that Siri’s speech recognition isn’t always 100% accurate either, especially when it’s trying to guess what you were saying.
Many are quick to draw comparisons between Siri and Android’s voice command system, or apps available on that platform like Vlingo. The big difference is that those services are much more like search engines than Siri, performing the required actions only if you utter the commands and key words in the right order. That’s the way nearly all voice operated systems have worked in the past (like those automated call centres that everyone hates) and it’s usually the reason most people are disappointed in them. Siri has the upper hand here as people are encouraged to speak to it naturally, rather than changing the way they speak in order to use it.
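To make the contrast concrete, here’s a toy sketch of the kind of rigid keyword-driven parser described above, where the exact trigger words have to appear in the expected order. This is purely illustrative: the command phrases and function are my own invention, not how Vlingo or Siri actually work under the hood.

```python
def keyword_parse(utterance):
    """Return a command tuple only if the rigid keyword pattern matches."""
    words = utterance.lower().split()
    # "call contact <name>" - the exact prefix is required
    if words[:2] == ["call", "contact"]:
        return ("call", " ".join(words[2:]))
    # "send text <name> saying <message>" - keyword order is fixed
    if words[:2] == ["send", "text"] and "saying" in words:
        i = words.index("saying")
        return ("sms", " ".join(words[2:i]), " ".join(words[i + 1:]))
    return None  # anything phrased naturally falls through unrecognised

# The rigid form works:
print(keyword_parse("call contact john smith"))     # ('call', 'john smith')
# ...but a natural phrasing of the same request does not:
print(keyword_parse("could you ring john for me"))  # None
```

The second call is exactly the kind of request Siri is meant to handle gracefully, and it’s why a keyword system ends up training the user to speak in its terms rather than their own.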
For all the good that Siri is capable of, it’s still at its heart a voice recognition system, and with that come some severe limitations. Ambient noise, including others talking around you, will confuse Siri completely, making it unusable unless you’re in a relatively quiet area. I’m not just saying this as a general thing either; friends with Siri have mentioned it as one of its shortcomings. Of course this isn’t unique to Siri and is unlikely to be a problem that can be overcome by technology alone (unless you could speak to Siri via a brain implant, say).
Like many other voice recognition systems, Siri is geared toward the accent of the country it was developed in, i.e. American. This isn’t just limited to the different spellings between, say, the Queen’s English and American English, but also the inflections and nuances that different accents introduce. Siri will also fall in a crying heap if the pronunciation and spelling differ, again limiting its usefulness. This is a problem that can and has been overcome by other speech recognition systems, and with additional languages for Siri already on the way I’d expect these kinds of problems to eventually be solved.
A fun little fact I came across in my research for this post is that Apple still considers Siri to be a beta product (right at the bottom, in small text that’s easy to miss). That’s unusual for Apple, as they’re not one to release a product unfinished, even if that comes at the cost of features not making it in. In a global sense Siri really is still beta, with some of her services, like Yelp and location based features, not being available to people outside the USA (as the above screenshot shows). Apple is of course working to make them all available, but it’s quite unusual for them to do something in this fashion.
So is Siri the next step in user interfaces? I don’t think so. It’s a great step forward for sure, and there will be people who make heavy use of it in their daily activities. However, once the novelty wears off and the witty responses run out I don’t see a compelling reason for people to continue using Siri. The lack of a developer API (and no mention of whether one will be available) means that the services that can hook into Siri are limited to those Apple develops, meaning some really useful services might never be integrated, forcing users back to native apps. Depending on how many services are excluded, people may just find it easier not to use Siri at all, opting for the (usually quite good) native app experience. I could be proven wrong on this, especially with technology like Watson on the horizon, but for now Siri’s more of a curiosity than anything else.
It’s been a long 7 months since I first laid eyes on Xcode and the iOS SDK, and I’ve had quite the love/hate relationship with them. There were times when I could spend only a couple of hours coding and blast through features barely breaking a sweat, and others when I’d spend multiple torturous hours figuring out why something just wasn’t working the way I thought it should. The last couple of months have been quite successful, as my code base has grown large enough to cover most of the rudimentary functions I use constantly and my muscle memory with certain functions is approaching a usable level. Last weekend it all came to a head after I polished off the last of my TODO list and sank back into my chair.
Then it hit me, this was a feature complete 1.0 release.
Apart from the achievements (which are barely implemented in the web client), you can do everything on the iPhone client that you could do with the full web client. I’ve taken design cues from many iPhone applications I’ve been using and I feel it’s quite usable, especially if you’re familiar with the myriad of Twitter clients out there. I’ve been fiddling with it over the past few days and it seems stable enough to unleash on others to see how it goes, and that’s where you, my faithful readers, come into play.
I’m looking for people to beta test this application pending a full release to the App Store. If you’re interested in testing it and have an iPhone 3G or later (the original 2G might work, but it would be dreadfully slow), hit me up on my gmail [email protected] and we’ll take it from there. I haven’t really experimented with Apple’s beta testing process yet, so the first lot of you are more than likely in for a fun ride as I stumble my way through deploying the application to you, but this is all part of the fun of being a very, very early adopter 🙂
Despite all the trials and tribulations developing this client has brought me, the experience is proving invaluable as it’s helped me refine the idea down to the core ideal I started with almost 2 years ago: getting people communicating around a location. It’s also the first new language I’ve learned in almost 5 years, and it has reminded me just how much fun learning and creating in a completely new environment can be, so much so that I’m almost completely sold on the idea of recoding the web client in Ruby on Rails. Still, that’s all pie in the sky stuff for now, as the next big improvement to Lobaco is moving the entire service off my poor VPS and into the wonderful world of the cloud, most likely Windows Azure. I hope you’ll jump on board with me for testing Lobaco, and hopefully in the future this will grow into something much more than my pet project.
Sandbox games and I have a sordid history. Whilst I often enjoy them, it’s not usually because of the engrossing story or intriguing game mechanics; more often it’s after I’ve finished the mission at hand, saved my game and then promptly engaged Jerk Mode, going on whatever kind of rampage the game allows me. Long time readers will remember this being the case in my Just Cause 2 review, where I grew tired of having to do everything within the rules of the game and modded my way to Jerk nirvana. Still, there have been some notable exceptions, like Red Dead Redemption, where the combination of certain elements came together in just the right way to get me completely drawn in and engrossed in the story.
Minecraft, whilst sharing the sandbox title, has almost no elements of a traditional game in this genre. Having more in common with game mods like Garry’s Mod, Minecraft throws you into a world where the possibilities really are only limited by your imagination. Over the past few months I watched the news around it go from a single story to a media storm, and I was always fascinated by the way it managed to draw people in. Up until a couple of weeks ago, however, I hadn’t bothered to try it for myself, not even the free version. After watching a few videos of some of the more rudimentary aspects of the game I decided to give it a go, and shelled out the requisite $20 for the full (beta) version.
That’s a deep mine…
The premise of the game is extremely simple. You’re thrust into a world where everything is made of blocks and at night hordes of zombies and other nefarious creatures emerge from the wilderness, baying for your blood. The only tools you have at your disposal are your blocky hands, but the world of blocks around you can be used to your advantage. By cutting down trees you can gather wood, which can then be converted into a whole range of tools. The race is then on to create some kind of shelter before nightfall, so that you might have a place to hide when the horde arrives. As you progress deeper, however, you’ll begin to discover rare and wonderful materials that make even better tools and weapons, leading you to delve even deeper underground in order to find those precious resources.
However, whilst the basic idea extends only to surviving through the night, there’s an entire meta game of creating almost anything you can think of within the Minecraft world. The world’s resources are pretty much at your disposal and their block-like nature means you can build almost anything out of them. This has led to many people building extremely ornate structures within Minecraft, ranging from simple things like houses right up to the Starship Enterprise. As with any sandbox game I took the opportunity for absurdity as far as I could imagine at the time, building a one-block-wide spire high up into the clouds where I mounted my fortress of evil.
All that’s missing is an Eye of Sauron.
The basic game mechanic of Minecraft has a distinctly MMORPG feel to it. You start out by cutting down trees for wood so you can make a pickaxe to mine cobblestone. You then use the cobblestone to make better tools in order to mine iron, and the iron to mine other resources like gold, diamond and redstone. Much like the gear grind that all MMORPGs take you through before the end game content, Minecraft gets you hooked in quickly, with the first few resource tiers passing by in no time. Afterwards it’s a much longer slog to get the minerals you require to advance, usually requiring you to dig extremely deep to find them. Like any MMORPG though, this mechanic is highly addictive, leading me to lose many hours searching for the next mineral vein so I could craft that next item.
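The tiered progression above can be sketched as a tiny tech tree, where each resource is gated behind tools made from the previous tier. The tier names follow the post; the structure itself is my own simplification for illustration, not Minecraft’s actual crafting rules.

```python
# Each resource maps to the tier whose tools are needed to mine it
# (None means it's harvestable by hand). A deliberately simplified model.
TECH_TREE = {
    "wood": None,           # punch trees, get wood
    "cobblestone": "wood",  # needs a wooden pickaxe
    "iron": "cobblestone",  # needs a stone pickaxe
    "diamond": "iron",      # needs an iron pickaxe
}

def can_mine(resource, inventory):
    """You can mine a resource once you hold material from the previous tier."""
    required = TECH_TREE[resource]
    return required is None or required in inventory

inventory = {"wood"}
print(can_mine("cobblestone", inventory))  # True
print(can_mine("iron", inventory))         # False: no stone tools yet
```

Laid out like this it’s easy to see where the grind comes from: each rung of the ladder forces another round of mining before the next one unlocks, exactly like an MMORPG gear treadmill.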
After the first week, however, I started to grow tired of the endless mining that didn’t seem to be going anywhere. I had dug all the way down to bedrock and found numerous rare resources, but seemed to be lacking the one mineral I needed to harvest them: iron. Googling around for a while led me to figure out that I was digging far too deep to find much iron, and that the best place to find resources was in randomly generated dungeons or caves, basically pre-hollowed out sections of the map that are always teeming with resources (and zombies). After randomly digging for a while I started hearing the distinctive zombie groan and I followed it to the ultimate prize.
Oh yeah, that’s the good stuff!
Exploring this find led me to a string of caves, all containing the resources I needed to progress further, and I was hooked again. Whilst the last few hours I’ve spent with Minecraft have focused more on extending my fortress of evil and the surrounding area, I still find myself taking regular trips down into the mines in the hope of coming across another cave or mineral vein, as the excitement of finding one is on par with getting some epic loot in an MMORPG. I also set about setting up a Minecraft server so I could play along with some of my more dedicated Minecraft friends, although with a server fan dying I’ve had to put that on hold until I can ensure it won’t overheat with more than one person playing on it.
Would I recommend this game? Most definitely, especially if you’re the type who enjoys sandbox style games that allow almost unlimited creativity. I was the kind of person who lost hours in Garry’s Mod, making wacky contraptions and using them to unleash untold torment onto hordes of Half Life’s NPCs. The tables are very much turned in Minecraft’s world but it’s just as enjoyable, and I have no doubt that anyone can lose a few good hours just exploring the retro world Minecraft generates for you. The game is still technically in beta, but it’s well worth the price of admission.
Minecraft is available for PC and web browser right now for a free trial or AU$20. Game was played on a local single player instance for the majority of game time with an hour or so spent on a multiplayer server. No rating is being assigned to this game as it’s still in beta.
Betas are a tricky thing to get right. Realistically, when you’re testing a beta product you’ve got a solid foundation of base functionality that you think is ready for prime time, but you want to see how it’ll fare in the wild as there’s no way to catch all the bugs in the lab. Thus you want your product to get into the hands of as many users as you possibly can, as that gives you the best chance of catching anything before you go live. Many companies now release beta versions of upcoming software to the general public for free in order to do this, and for many of them it’s proven to work quite well. More recently, however, I’ve seen beta testing used as a way to promote a product rather than test it, and the main way they do that is through artificial scarcity.
Rewind back to the heady days of 2004 and you’ll find me happily slogging away at my various exploits when a darkness forms on the horizon: World of Warcraft. After seeing many of the gameplay videos and demos, I was enamoured with the game long before it hit the retail shelves. You can then imagine my elation when I found out there was a competition for a treasured few closed beta invitations; not 10 minutes later I had entered. As it turns out I got in, and promptly spent the next fortnight playing my way through the game and revelling in the new found exclusivity it had granted me. Being a closed beta tester was something rather special and I spoke nothing but praise to all my friends about this upcoming game.
Come back to the present day and we can draw parallels to the phenomenon that is #newtwitter. Starting out on the iPad as the official Twitter client, #newtwitter is an evolution of the service Twitter is delivering, offering deeper integration with services that augment it and significantly jazzing up the interface. Initially it was only available to a select subset of the wider Twitter audience and, strangely enough, most of them appeared to be either influential Twitter users or those in the technology media. The reviews of the new client were nothing short of glowing, and as it has made its way around to more of the waiting public people have been more than eager to get their hands on it. Those carefully chosen beta testers at the start helped form a positive image that’s kept any negativity at bay, even with Twitter’s recent security problems.
This is in complete contrast to the uproar that was felt when Facebook unveiled its new user interface at the end of last year. Unlike the previous two examples, the new Facebook interface was turned on all at once for every single user who visited the site. Immediately following this, millions of users cried out in protest, despising the new design and the amount of information being presented to them. Instead of the new Facebook being something cool to be in on, it proved to be enough of an annoyance to cause a stir, rather than inspiring anyone to sing its praises.
The difference lies in the idea of artificial scarcity. You see, there really wasn’t anything stopping Blizzard or Twitter from releasing their new product onto the wider world all at once as Facebook did; however, holding back was advantageous to them for numerous reasons. For both it allowed them to get a good idea of how their product would work in the wild and catch any major issues before release. Additionally, the exclusivity granted to those few souls who got the new product early put them on a pedestal, something to be envied by those doing without. Thus a product that was already desirable becomes even more so because not everyone can have it. A gradual release also ensures that that air of exclusivity remains long after the product reaches the wider world, as can be seen with #newtwitter.
I say all this because, honestly, it works. As soon as I heard about #newtwitter I wanted in on it (mostly because it would be great blog fodder) and the fact that I couldn’t do anything to get it just made me want it all the more. I’ve also got quite a few applications on my phone that I signed up for simply because of the mystery and exclusivity they had, although I admit the fascination didn’t last long. Still, the idea of a scarce product seems to work well even in the digital age, where such restrictions are wholly artificial. Just like when, say, someone posts a teaser screenshot on Facebook sans URL to an upcoming web application.
I’m sure most of you knew what I was up to anyway 😉
So, as those who have been following me on Twitter may know, I’ve spent the last 2 weeks schooling myself in the world of Silverlight and Microsoft Rich Internet Application (RIA) Services. Now I’m no stranger to the idea of n-tier application development, but Microsoft’s implementation appears woefully complicated when you first get into it (thanks to the lack of clear tutorials and documentation), becoming quite simple once you get past that first hurdle. Many things you’d think you’d have to code up substantial amounts of logic for, say saving changed objects to the database, are handled for you by some black magic hidden in the background. I’m not complaining though: whilst I believe an understanding of what’s happening behind the scenes is vital for writing good code, actually implementing it every time you want to do something would be quite a chore.
It actually reminds me of another project I started a long time ago called Yurai (it’s a desktop application, so you probably won’t ever get to see it in The Lab). Back when I was working as a help desk monkey and finishing off the last year of my degree, the whole n-tier design pattern was firmly lodged in my head and I got the idea that the software we were using (called Infra, now owned by VMware of all companies) was far too bloated and that I could make a substitute myself. Coincidentally, a friend of mine had just started his own home IT service business and was using a paper pad to track jobs. I took it upon myself to code him up an application, and my first foray into the world of being a real developer began.
The application itself never got past the initial design phase. Whilst I did manage to (manually) create all 3 tiers with their associated logic and whatnot, the system only covered a small subsection of the functionality they would require. With my university commitments ramping up I never got time to finish the project, and it now sits in a backup folder on one of my many hard drives. Still, thinking back to those days I can see how far Microsoft has come in making it easy for an average-skilled developer like myself to build these applications in a timely fashion. The same amount of time invested back then yielded about 10% of the results, meaning far less time is now spent coding the rudimentary parts of the system and more time focusing on what’s critical to your application.
So the last 2 weeks of work have culminated in this: a working user authentication system for Geon. Not only that, if you click the link at the top (it might be a bit obscured on monitors less than 1680 pixels wide; I’ll fix that this afternoon) you can sign up for an account to use with Geon. That account will let you save your feeds so that next time you log in you don’t have to go clicking around again to set it all up; just make sure to hit the Logout button when you’re done (I still need to implement logout on window close). An account is not required to use Geon, but in the future I’ll be adding a lot more things that will require a user account, and who knows, I might give you something special for beta testing my stuff 😉 Your account will need to be approved by me before you’ll be able to use it, however, and that’s just to make sure I don’t get a flood of people signing up before I’m ready to let the user auth system go live.
But don’t let that stop you from signing up. Go on you know you want to.
Hopefully with that part out of the way the core functionality of Geon will come along soon. What I’m referring to is my original idea: being able to ask anyone in a certain area a question and have them respond back with text/image/video/whatever. This of course relies on people actually running my application, and with it currently restrained to the browser that makes the potential audience somewhat limited, but it can still work as a test bed for the handset applications. There’s going to be a lot of messing about to get that all wrangled in (I’ll have to undo some of the black magic that Microsoft has done for me thus far to make sure it’s secure) but that’s all part of the fun, or at least that’s what I’m telling myself.
Additionally I’ll have a tutorial up somewhere on this blog (I’ll update this post with a link) on how to get started using Geon, as I’ve had a few people tell me that it doesn’t work, only to find out that they’ve been clicking in ways I didn’t expect. That’s partly my fault for changing the UI on them and not making it clear that it didn’t work the way it used to, but if I take a leaf out of Google’s book that’s what users are for: trying out your beta code so you don’t have to do as much testing yourself 😉
So as always hit up Geon and let me know what you think by posting a comment below, tweeting me or sending me an email at [email protected].
EDIT: As promised I’ve created a new page with a quick rundown (with pictures!) of how to get going with Geon.
One of the biggest struggles the software industry faces is the not-so-underground pirate market. Whilst piracy used to be confined to certain countries and small, close-knit social groups, over the years it has become increasingly mainstream. Gone are the days when only the technically elite had the means and motivation to copy untold millions of dollars’ worth of software; now anyone with a quick Google search and a hunger for something free can get what they want.
So what can you do in a market where people will have your product despite not having paid for it? Simple: convert those people (who would probably not buy your software anyway, even if it were “unpiratable”) into your unruly mass of beta testers. How would you go about something like this? Well, Microsoft certainly has a novel way of recruiting beta testers:
The Release Candidate is now available to MSDN and TechNet subscribers, and will go on unlimited, general release on 5 May.
The software will not expire until 1 June 2010, giving testers more than a year’s free access to Windows 7.
“It’s available to as many people who see fit to use it, although we wouldn’t recommend it to just your average user,” John Curran, director of the Windows Client Group told PC Pro. “We’d very strongly encourage anyone on the beta to move to the Release Candidate.”
Being a beta tester of Windows 7 myself I can attest to the high build quality of the current release, and if the previous builds are any indication the RC will be a very polished operating system. This is the kind of thing that could lure those devilish pirate users away from their current installs of Windows, which can’t be patched or updated with Microsoft’s value-add software, onto a new system where they’re treated essentially as fully paid Microsoft customers. Not to mention the perks from other companies, like free antivirus, on top of something that’s already completely free.
Another bit of evidence that lends credence to this theory is that, even months after Microsoft pulled the keys from their Windows 7 registration site, the torrent for the latest build remains up for all to download and play with. Whilst you run the risk of downloading a pre-loaded trojan, Microsoft was kind enough to publish SHA-1 hashes of the builds, allowing you to verify that your downloaded file is genuine. It also takes a bit of load away from Microsoft, who should have considered releasing an official torrent in the first place.
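If you’re wondering what that verification actually looks like in practice, here’s a minimal sketch in Python. The filename and the “pretend” file contents are placeholders for illustration; a real check would hash the downloaded ISO and compare it against the hash Microsoft published:

```python
import hashlib

def sha1_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-1 hex digest of a file, reading it in 1 MB chunks
    so even a multi-gigabyte ISO doesn't need to fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the downloaded image (a real check would use the ISO itself
# and the hex string published alongside the download).
with open("build.iso", "wb") as f:
    f.write(b"pretend this is the Windows 7 RC image")

published = hashlib.sha1(b"pretend this is the Windows 7 RC image").hexdigest()
print(sha1_of_file("build.iso") == published)  # True when the file is intact
```

If the hashes don’t match, the file was corrupted in transit or tampered with, and you throw it away rather than install it.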
So what do they have to lose by switching across? For the most part they might have some issues with legacy software and possibly some hardware incompatibilities. When I first installed Windows 7 most of my hardware already had Windows 7 drivers available, and where those failed the Vista drivers worked (albeit with a few tweaks). Since these users are now technically customers of Microsoft they can ask for support with their problems, something that would previously have involved trawling through endless web searches hoping someone else had the same issue.
Doing this kind of long beta is, however, a double-edged sword. As many software developers have found, providing your software to the general public ahead of time gives the hackers and crackers a head start on your copy-protection mechanisms¹. By the time Windows 7 hits the stores the activation scheme will be well known and Microsoft will be a step behind in the ongoing arms race with the pirates. It also takes away a lot of the hype around the product, since everyone who would be buying it will probably already have it installed.
For Microsoft this is making the best of a bad situation, and overall it’s a good move for them. Whilst the rate won’t be high, I’m sure there are some people running a previous (pirated) version of Windows who will consider forking over some cash for the new version once they’ve played with it for a year. Additionally, the corporate sector will have a long time to prepare for Windows 7, easing the transition pain somewhat.
I know I’ll be running it for the coming year 🙂
¹ Whilst I can’t find a good link on one of the techniques I used to hear of, I’ll attempt to explain it here. Many game development companies would provide a demo or trial version a few weeks before official release in order to generate a bit of hype. This would usually include a lot of the production code, and most of the time it wouldn’t contain the DRM or genuine-copy verification mechanisms. Many would-be hackers would then use the files in the demo to create cracks for the retail versions, sometimes by simply copying the main executable from the trial over the top of the retail one.
In this rapidly changing, technologically driven world many new up-and-comers find it hard to differentiate themselves from amongst the hundreds of similar projects. In an effort to drive people to use their services we’re seeing more and more companies go the route of providing some or all of their products completely free to the end user. Whilst I believe this is a great idea there are, of course, always some catches when it comes to accepting free gifts from corporate overlords.
A great example I can think of is the good old de facto corporate communication device, the Crack(Black)Berry. Recently at my current gig for the Australian government my department decided to trial these in order to see if there was any value in implementing them. Of course Telstra came to the table offering a free 3-month trial with pretty much everything included. The handsets were sent out to the executives and we went through about 2 days of configuration work to get it all set up for them. It didn’t matter that we’d already installed Exchange ActiveSync, which would have let them use any Windows Mobile device and wouldn’t have cost a cent since we’d bought the licence in a bundle. Since the BlackBerrys had been in the Qantas lounge magazines we were basically stuck trialling this technology for them, and we all knew where it was going.
Fast forward to the end of the trial and we had half the execs praising the new system, a few dissenters and the rest on the fence. It was pretty obvious from the outset that once this was in place they would never give it up, even though the corporate directive is to investigate all possible solutions and judge them on their merits.
The same tactic has played out with many online services. LinkedIn used to be a completely free service for professional social networking, and it did a great job at that. It was basically a no-frills Facebook, which is handy when you’re browsing it at work. Of course the creators saw that they could add extra features and offer them as premium accounts, something akin to buying an expensive car in real life: sure, it will probably improve people’s impression of you (if they’ve never met you before), but past that its value is rather small. Since many people use LinkedIn to build a professional network and hopefully generate business from it, the paid services might hold some value there. There’s still no substitute for good old-fashioned real-life networking, though that doesn’t stop people from trying to charge for that, either.
However, there are those that still buck the trend, providing services for free and staying away from the premium service charge. Google has released service after service that, whilst most still carry the beta tag, remain free after many years in operation. This can all be put down to their ruthless precision in refining an advertising model that appeals to every business, built upon their dominance as a search engine.
In reality most new up-and-coming technologies these days are offered as a free baseline, with the additional features costing you a few pennies more. It’s all done to drive up market adoption, and it’s a great thing for consumers, who get a lot more for their dollars since they can try before they buy. Just don’t be too shocked when your favourite free service starts asking for your credit card 😉