Posts Tagged ‘product’

Don’t Get Me Wrong, Kickstarter is Great, But…

The idea behind Kickstarter is a great one: you’ve got an idea and the fixings of a potential business, but the financial barriers to bringing it to market are keeping you from seeing it through. So you whip up a project on there, promise people rewards or (more commonly) the actual product you’re intending to sell, and then wait for backers to pledge some cash to you. It’s great for the backers as well: if the project doesn’t get fully funded then no one has to hand over any money, so your potential risk exposure is limited. Of course Kickstarter takes its slice of the action, to the tune of 5% (plus another 3~5% for payment processing), so everyone comes out a winner.
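To put those fees in perspective, here’s a quick back-of-the-envelope sketch in Python (the flat 4% processing figure is my own assumption, picked from the middle of the 3~5% range above):

```python
# Rough estimate of what a creator actually pockets from a fully funded project,
# using the fee figures discussed above (5% Kickstarter + ~3-5% payment processing).
def net_proceeds(pledged, kickstarter_fee=0.05, processing_fee=0.04):
    """Return the creator's take after platform and processing fees."""
    return pledged * (1 - kickstarter_fee - processing_fee)

if __name__ == "__main__":
    pledged = 100_000  # hypothetical fully funded total
    print(f"Pledged: ${pledged:,.0f}")
    print(f"Creator receives: ${net_proceeds(pledged):,.0f}")  # roughly $91,000
```

In other words a fully funded $100,000 project nets its creator somewhere around $91,000 before anything has been built or shipped.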

It’s a disruptive service, there’s no denying that. Many products that wouldn’t have made it through a traditional venture capital process have become wild successes thanks to Kickstarter. This of course gets people thinking that those traditional systems are no longer needed: who needs venture capitalists when I can get my customers to fund my project? Well, whilst I’d love to believe that all we need for funding is crowdsourcing tools like Kickstarter, I can’t help but notice the pattern behind most of the successful endeavours on there.

They’re all done by people who were already successful in the traditional business world.

Take for instance the latest poster child for the success of Kickstarter: the Double Fine Adventure. For gamers the Double Fine name, and the man behind it, Tim Schafer, is a recognizable one: he worked on cult classics like The Secret of Monkey Island and Grim Fandango and went on to release the likes of Psychonauts and Brutal Legend. Needless to say he’s quite well known, and he made his name in the traditional game developer/publisher world. Kickstarter has allowed him to cut the publishers out of this particular project, putting more cash in his pocket and giving him total control over it, but could someone without that kind of brand recognition pull off the same level of success?

The answer is no.

For all the successes that are seen through Kickstarter, only 44 percent of projects will ever actually get the funding they require. Indeed in the Video Games category the highest funded game (there are a lot of projects in there that aren’t exactly games) before the Double Fine Adventure managed about $72,000. Sure that’s nothing to sneeze at, it was almost 6 times what they needed, but it does show the disparity between relative nobodies attempting to crowdfund a project and a well known person attempting the same thing. Sure there are a few breakout successes, but behind the majority of large funding successes you’ll usually find someone who’s already known in that area involved somehow.

Now I don’t believe this is a bad thing, it’s just the way the process works. Nothing has really changed here except that the judgement call has shifted from the venture capitalists to the wider public, and as such many of the same factors influence if, when and how you get funded. Name recognition is a huge part of that; just take a look at things like Color, which managed to pull in a massive $41 million in funding before it had even got a viable product off the ground, purely because of the team of people behind the idea. Kickstarter doesn’t change this process at all, it’s just made it more visible to everyone.

Does this mean I think you should keep away from Kickstarter? Hell no. If you’ve got a potential product idea and want to see if there’s some kind of market for it, Kickstarter projects, even if they’re not successful, are a great way of seeing just how much demand is out there. If your idea resonates with the wider market then you’re guaranteed a whole bunch of free publicity, much more than you’d get if you just approached a bank for a business loan. Just be aware of what Kickstarter does and does not do differently to traditional ways of doing business, and don’t get caught up in the hype that so often surrounds it.

Sometimes You Have to Ignore Your Users.

My mum isn’t the most technical person around. Sure she’s lived with my computer-savvy father for the better part of 3 decades, but that still doesn’t stop her from griping about new versions of software being confusing or stupid, much like any regular user would. Last night I found out that her work had just switched over to Windows 7 (something I’ve yet to do at any office, sigh) and Office 2010. Having come from XP and Office 2003 she lamented the new layout of everything and how it was impossible to get tasks done. I put forth that it was a fantastic change and that whilst she might fight it now she’d eventually come around.

I didn’t do too well at convincing her of that, though 😉

You see when I first saw Vista I appreciated the eye candy and various other tweaks, but I was a bit miffed that things had been jumbled around for seemingly no reason. Over time though I came to appreciate the new layout and the built-in augmentations (start menu search is just plain awesome) that helped me do things that used to be quite laborious. Office 2007 was good too, as many of the functions that used to be buried in an endless stream of menu trees were now easily available and I could create my own ribbon with my most-used things on it. Most users didn’t see it that way however, and the ribbon interface received heavy criticism, on par with that leveled at Vista. You’d then think that Microsoft would’ve listened to their users and made Windows 7 and Office 2010 closer to the XP experience, but they didn’t and continued along the same lines.

Why was that?

For all the bellyaching about Vista it was actually a fantastic product underneath. Many of the issues were caused by manufacturers not providing Vista-compatible drivers, magnified by the fact that Vista was the first consumer-level operating system to support 64-bit operation on a general level (XP64 was meant for Itaniums). Over the years of course drivers matured and Vista became quite a capable operating system, although by then the damage had already been done. Still it laid the groundwork for the success that Windows 7 has enjoyed thus far and that will continue long after the next iteration of Windows is released (more on that another day ;)).

Office 2010 on the other hand was a different beast. Microsoft routinely consults with customers to find out what kind of features they might be looking for in future products. For the past decade or so, 80% of the most requested features had already been in the product for a while; users just weren’t able to find them. In order to make them more visible Microsoft created the ribbon system, putting nearly all the features less than one click away. Quite a lot of users found this annoying since they were used to the old way of doing things (and many old shortcuts no longer worked), but in the end it won over many of its critics, as showcased by its return in 2010.

What can this experience tell us about users? Whilst they’re a great source of ideas and feedback that you can use to improve your application, sometimes you have to make them sit down and take their medicine so that their problems can go away. Had Microsoft bowed to the demands of some of their more vocal users we wouldn’t have products like Windows 7 and Office 2010 that rose from the ashes of their predecessors. Of course many of the changes were initially driven by user feedback, so I’m not saying that their input was completely worthless, more that sometimes in improving a product you’ll end up annoying some of your loyal users even if the changes are for their benefit.

Focused Simplicity.

It’s really easy to fall into the trap of trying to build something you think is simple that ends up being a complicated mess. We engineers are amongst the most common offenders in this regard, often taking a simple idea and letting the feature creep get out of hand until the original idea is coated in 10 layers of additional functionality. I’d say this is partly due to our training, as modular design and implementation was one of the core engineering principles drilled into me from day 1, although to be fair they also taught us how quickly the modular idea fell apart if you took it too far. There’s also the innate desire to cram as much functionality as you can into your product or service, as that would seem to make it more appealing to the end user. However that’s not always the case.

When Geon was starting out I had a rough idea of what I wanted to do: see what was going on in a certain location. That in itself is a pretty simple idea and the first revisions reflected that, although that was probably due to my lack of coding experience more than anything else. As time went on I got distracted by other things that forced me away from my pet project, and upon returning I had one of those brainwaves for improving Geon in ways I had not yet considered. This led to the first version that actually had a login and a whole host of other features, something I was quite proud of. However it lacked focus and was confusing to use, and ultimately, whilst it satisfied some of the core vision, it wasn’t anything more than a few RSS feeds tied together in a Silverlight front end, with a badly coded login and messaging framework hidden under the deluge of other features.

Something needed to change and thus Lobaco was born.

Increasingly I’m seeing that simplicity is the key to creating an application that users will want to use. On a recent trip to Adelaide my group of friends decided to use Beluga to co-ordinate various aspects of the trip. Beluga really only does one thing, group messaging, but it does it so well and in such a simple way that we constantly found ourselves coming back to it. Sure many of its functions are already covered by, say, SMS or an online forum, but having a consistent view for all group members that just plain worked made organizing our band of bros that much easier. It’s this kind of simplicity that keeps me coming back to Instagr.am as well, even though similar levels of functionality are included in the Twitter client (apart from the filters).

Keeping an idea simple sounds like it would be easy enough, but the fact that so many fail to do so shows how hard it is to refine a project down to its fundamental base in order to develop a minimum viable product. Indeed this is why I find time away from developing my projects to be nearly as valuable as the time I spend with them, as it often gets me out of the problem space I’ve been operating in and lets me refine the core idea. I’ve also found myself wanting simple products in lieu of those that do much more, just because the simple ones tend to do it better. This has started to lead me down the interesting path of finding things I think I can do better by removing the cruft from a competing product, and I have one to test out once I get the first iteration of the Lobaco client out of the way.

I guess that will be the true test to see if simplicity and focus are things customers desire, and not just geeks like me.

Norton Internet Security 2011: My How Things Have Changed.

It’s been a long time since I used a Norton product. Way back when I had just started working for Dick Smith Electronics I can remember happily recommending their products to nearly every customer that walked through the door, and rarely did I get any complaints back from them. That all changed when I moved on to actually fixing people’s computers, whereupon I discovered that Norton’s latest incarnation (then 2004) was actually worse than the problems it was trying to solve. So many times I’d fully clean up a PC only to have it bog down again when I put Norton back on, so you can imagine my scepticism when I was approached to review their latest version, Norton Internet Security 2011. Still, I figured they couldn’t have kept going if their product range had continued down the path it was on all those years ago, so I decided to give it a go to see how far (or not) they had come.

Still I wasn’t entirely ready to risk my main machine with this, so I fired up a Windows 7 virtual machine on my server and began the installation process on it. Installing Norton took just under 10 minutes, including the time it took to download the updates. Interestingly the installer updated itself before attempting to install on my system, which is definitely a welcome change from updating afterwards. Doing so before installation means that Norton should be capable of detecting threats that might try to subvert the installation process if you’re trying to clean an already compromised system. Unfortunately you have to provide your registration key before the install will complete, meaning there’s no free trial should you want to give the software to friends to try before they buy it. Still, the retail copy allows you to protect up to 3 PCs for the one purchase, enough to cover most households. Part of the installation process will also ask if you want to participate in the Norton Community, which I’d definitely recommend you do (more on this later).

The user interface is a world away from the Norton that I remembered. The main screen is very well laid out with all the needed features available right up front; I rarely had to dig more than one or two layers deep to find a setting I was looking for. The map at the bottom of the screen shows recent cyber crime incidents across the world (although how they define this is a bit of a mystery) and is pretty cool to watch as it ticks slowly over the past 24 hours. By itself though it doesn’t really add much value for the regular user apart from possibly piquing their curiosity about the events.

At this point a regular user could close the program and leave it at that since everything else is taken care of automatically by Norton Internet Security. This was why I used to recommend Norton products to people as they required the least amount of intervention from users to ensure that they kept working as intended. For the super and power users however there’s a fair bit more value that can be unlocked if you want to go digging a little deeper into Norton Internet Security, as I’ll show you below.

Before I get into the guts of this program let me talk about its performance. Talk to any long-time Windows administrator and they’ll tell you that anti-virus programs can be some of the most performance-degrading applications you can install on your PC. This isn’t through any fault of their own, it’s more that to provide the maximum level of security they have to be constantly active, ensuring they’re ready for any incoming threats. Norton used to be the worst of the lot in this regard, often bringing top of the line equipment to its knees in order to keep it safe.

Norton Internet Security 2011 however has progressed quite significantly since my encounters with its previous incarnations. Keen readers would’ve noticed that the main screen of Norton had a Performance link on it, which reveals the screen shown above. The period shown before the two large spikes was completely idle and you can see that Norton does a good job of keeping its resource usage low during these periods. The two large spikes are from me performing a scan across about 600GB of data; doing that will use up most of your available system resources whilst the scan runs its course. This isn’t unique to Norton however, and the scanning itself was quite quick, taking just under an hour to complete. The System Insight section provides an overview of what has been happening on your system over the past month. For an administrator like me such information can be quite valuable, especially when trying to diagnose when a problem may have originated.

The meat of any AV program however is in its ability to catch potential problems before they can do any harm, which Norton Internet Security seems quite capable of doing.

The EICAR file is a virus test file designed to trigger any AV product. Upon downloading it I was greeted with a little pop up in my browser that said it was scanning the file for viruses, and not too long after I was presented with this. As you can see, not only does Norton identify the file and remove it before it has a chance to inflict any damage, it also provides a wealth of information about the potential threat it removed from your system. This is where the power of the Norton Community comes in, as it provides you with some idea of how widespread a threat might be and what it might do to your system if it was infiltrated. This kind of information is great for empowering users, making them aware of what’s happening and hopefully educating them to avoid such things in the future. Most users probably won’t take advantage of this but it’s still quite useful for power users or system administrators.
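If you want to run the same kind of detection test yourself, here’s a minimal sketch in Python (my choice of language; the file name is arbitrary) that writes out the standard EICAR test string. The string is harmless by design, but a working AV product should pounce on the file the moment it hits disk:

```python
# Write the industry-standard EICAR anti-virus test string to a file and see
# whether the resident AV quarantines it. The string is inert by design.
# Note: some AV products may also flag this script itself once it's saved.
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

with open("eicar_test.txt", "w", newline="") as f:
    f.write(EICAR)

print("Test file written; a functioning AV product should quarantine it almost immediately.")
```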

The feature even extends to running processes which becomes quite handy for something you might be suspicious of but aren’t quite sure about. Again this kind of information might not be particularly useful to the user directly but it could prove quite valuable to administrators or super users attempting to troubleshoot issues.

The second feature set is the network protection section which encompasses two interesting features: Vulnerability Protection and the Network Security Map.

Vulnerability protection is an interesting idea. In essence Norton Internet Security can protect against flaws in particular programs, preventing the exploit from working. Whilst the vast majority of these exploits have been patched not all users are rigorous with their updates and Norton can help cover the gap for them. Additionally this also allows Norton to respond to threats quite quickly, nullifying their effects whilst the software vendors work on releasing a patch. Since there’s usually a month between patch cycles this feature goes a long way to securing a user against imminent threats that they might not even be aware of.

The network security map gives you a broad overview of the network you’re on and the other devices connected to it. This kind of thing can be helpful for users who are on public internet connections and want to be sure that they’re safe. Whilst it can’t detect any of the advanced threats (like a compromised access point running a man-in-the-middle attack) it does give users some much needed guidance on when they should and shouldn’t be doing things over a public connection. The information on other hosts is interesting too, as it’s basically an IP and port scanner. Normal users probably won’t care about the information contained in here, but after the hassle I went through to spoof a MAC address for free wifi in Los Angeles this kind of thing is quite valuable (if for all the wrong reasons ;)).
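For the curious, a feature like that is, at its core, doing something akin to a TCP connect scan of common ports on each host it discovers. Here’s a rough sketch of the idea in Python; the target address and port list are purely illustrative, and this isn’t meant to reflect Norton’s actual implementation:

```python
# A simple TCP connect scan of a few common ports on a single host, the kind of
# probing a network map feature performs against each device it discovers.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 139: "netbios", 443: "https", 445: "smb", 3389: "rdp"}

def scan_host(address, ports=COMMON_PORTS, timeout=0.5):
    """Return the ports on the target that accept a TCP connection."""
    open_ports = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((address, port)) == 0:  # 0 means the connection succeeded
                open_ports[port] = name
    return open_ports

if __name__ == "__main__":
    print(scan_host("192.168.1.1"))  # hypothetical home router
```

Needless to say, only point something like this at networks and hosts you own or have permission to poke at.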

Lastly there’s the Web Protection section, which contains an identity safe, a credit card store and a parental controls section. Whilst there are already many password-saving solutions out there, the fact that Norton includes one is a good step towards improving a user’s security. Using a password store means that should you be compromised by a keylogger, a malicious attacker won’t be able to grab your passwords as you type them in. Sure there’s the possibility they’ll crack the store, but it’s another layer of security that can help reduce the impact of a compromised system. The same can be said for the credit card store: whilst credit card details are one of the few things you don’t want to store anywhere on your computer, using this store provides similar benefits to the password safe.

I didn’t get into the parental controls section much as it’s very much geared towards fretting parents who require fine-grained control over their child’s online experience. It provides all the useful goodies of being able to see what your kids are doing online and creating rule sets for browsing, but probably the most useful part of it is the online resources for educating children on safe web behaviour. Personally I’m a fan of keeping the PCs in a communal area and being an active online participant yourself instead of trying to approach the problem at arm’s length with tools like this. Still, it wouldn’t be in the product if users hadn’t been begging for it, so I’m sure many will appreciate its inclusion.

To be honest I went into this review with a great deal of scepticism, thinking that Norton wouldn’t have changed their sinful ways despite their continued existence. I’m glad to say that my experience with their latest product, Norton Internet Security 2011, changed all that; they’ve delivered a program I wouldn’t hesitate to recommend and use myself. Harnessing their large user base and empowering users with the information it gathers is an excellent way to improve security, and for power users like me it’s something that will give me just that little bit of an edge when dealing with unknown issues. Before I reviewed this product I didn’t think I’d ever need to pay for anti-virus again, as things like Microsoft Security Essentials covered all the required functionality. Now however I can see the vast difference between a paid product like this and its free cousins, and I couldn’t bring myself to say that buying Norton Internet Security would be money wasted any more. If you’re looking for a paid anti-virus product with a wealth of features you wouldn’t go wrong with Norton Internet Security 2011.

Norton Internet Security 2011 is available from most software stores and online for AU$69.99. A copy of this software was provided to me free of charge for the purposes of reviewing it. All testing was conducted on a Windows 7 virtual machine running on VMware ESXi with 2 vCPUs, 2GB RAM and a 40GB HDD.

From Beta to Bonne: Artificial Scarcity is the Key.

Betas are a tricky thing to get right. Realistically when you’re testing a beta product you’ve got a solid foundation of base functionality that you think is ready for prime time, but you want to see how it’ll fare in the wild as there’s no way for you to catch all the bugs in the lab. Thus you’d want your product to get into the hands of as many users as you possibly could, as that gives you the best chance of catching anything before you go prime time. Many companies now release beta versions of upcoming software for free to the general public in order to do this, and for many of them it’s proven to work quite well. More recently however I’ve seen beta testing used as a way to promote a product rather than test it, and the main way they do that is through artificial scarcity.

Rewind back to yonder days of 2004 and you’ll find me happily slogging away at my various exploits when a darkness forms on the horizon: World of Warcraft. After seeing many of the gameplay videos and demos I was enamoured with the game long before it hit the retail shelves. You can then imagine my elation when I found out there was a competition for a treasured few closed beta invitations, and not 10 minutes later I had entered. As it turns out I got in, and promptly spent the next fortnight playing my way through the game and revelling in the new found exclusivity it had granted me. Being a closed beta tester was something rather special and I spoke nothing but praise to all my friends about this upcoming game.

Come back to the present day and we can draw parallels to the phenomenon that is #newtwitter. Starting out on the iPad as the official Twitter client, #newtwitter is an evolution in the service Twitter is delivering, offering deeper integration with the services that augment it and significantly jazzing up the interface. Initially it was only available to a select subset of the wider Twitter audience, and strangely enough most of them appeared to be either influential Twitter users or those in the technology media. The reviews of the new Twitter client were nothing short of amazing, and as the client has made its way around to more of the waiting public people have been more than eager to get their hands on it. Those carefully chosen beta testers at the start helped form a positive image that’s kept any negativity at bay, even with their recent security problems.

This is in complete contrast to the uproar that was felt when Facebook unveiled its new user interface at the end of last year. Unlike the previous two examples the new Facebook interface was turned on all at once for every single user that visited the site. Immediately following this millions of users cried out in protest, despising the new design and the amount of information being presented to them. Instead of the new Facebook being something cool to be in on, it proved annoying enough to a large group of people that they caused a stir about it rather than singing its praises.

The difference lies in the idea of artificial scarcity. You see there really wasn’t anything stopping Blizzard or Twitter from releasing their new product onto the wider world all at once as Facebook did; however, holding back was advantageous to them for numerous reasons. For both it allowed them to get a good idea of how their product would work in the wild and catch any major issues before release. Additionally the exclusivity granted to those few souls who got the new product early put them on a pedestal, something to be envied by those who were doing without. Thus a product that was already desirable becomes even more so because not everyone can have it. Doing a gradual release also ensures that that air of exclusivity remains long after the product is released to the wider world, as can be seen with #newtwitter.

I say all this because, honestly, it works. As soon as I heard about #newtwitter I wanted in on it (mostly because it would be great blog fodder) and the fact that I couldn’t do anything to get it just made me want it all the more. I’ve also got quite a few applications on my phone that I signed up for simply because of the mystery and exclusivity they had, although I admit the fascination didn’t last long for them. Still, the idea of a scarce product seems to work well even in the digital age, where such restrictions are wholly artificial. Just like when, say, someone posts a teaser screenshot of an upcoming web application on Facebook, sans URL.

I’m sure most of you knew what I was up to anyway 😉

Microsoft’s Social Network: Curiously Absent?

There’s no question now that the hot thing for any company to do is to make some kind of software that has a social component to it, and why wouldn’t you? If your product is based around friends (and not-really-friends) interacting with each other then the marketing really does itself, so long as your product is somewhat useful or novel. It’s getting to the point where once a service has been around for a while it will inevitably either integrate with Facebook or build in its own social networking components, usually to keep driving the user numbers upwards. No company seems to be immune to this (even my fledgling little application allows you to log in via Facebook), except for one: Microsoft. Despite the social revolution that seems to be rampaging on around them Microsoft has quietly kicked back, letting others duke it out for social supremacy. For a company that’s renowned for throwing money around in order to gain market share in pretty much every IT-related area their silence on the social scene is quite eerie, verging on the point of them knowing something the rest of us don’t.

For the most part their strategy seems to have been one of going along with the current trend of integrating their products with the current social giants. Their MSN Messenger product was just recently updated with a new beta that had Facebook integration. Already it’s garnered a healthy 4.6 million users or approximately 1% of Facebook’s user base. That might not sound like much but considering that it’s still in beta and the current incarnation of the Live product has well over 330 million users you can expect that a lot of people are going to be getting their Facebook fix from Microsoft. Additionally many Outlook users would be familiar with their new Social Connector which is in essence a social network for businesses and has been getting some traction due to its integration with Sharepoint and the Office suite of products.

Still there’s no Microsoft Social Network (MSN? Ha!) to be found, so what’s the deal?

Part of the answer would seem to lie in the past. Rewind back about 3 years and you’ll come across a flurry of articles speculating on a bidding war between Microsoft and Google for a piece of the next hottest thing: Facebook. Surprisingly enough Microsoft won out in the end, managing to secure a small share of the company, a 1.6% stake, for a cool $240 million. This was a continuation of the relationship they had established when Microsoft secured an advertising deal with Facebook just one year earlier. Still, it was an odd move for Microsoft as the investment was peanuts for them (they had over $23 billion in cash on hand, yes cash), and realistically even if the company went to IPO and they got a 10x exit you’re still only looking at $2.4 billion for a company that turns that over in about 2 weeks, so it was more a foot in the door than anything else. Their recent integration activities with Facebook also show that they’re more keen to work with them than to try to push them out of the market.

Strangely enough it looks like Microsoft actually did try to compete with Facebook all those years ago. I’ll admit I didn’t know about this when I first started writing this post, I only came across it in my research, but it appears that in response to Facebook opening up to everyone back in 2006 Microsoft retaliated by launching their own site, Wallop:

Seattle-based Microsoft Corp.‘s (NASDAQ: MSFT) spin-off Wallop said Tuesday it was starting service. It’s a site intended to compete head on with MySpace and Facebook. Wallop starts with $13 million in backing from Microsoft, Norwest Venture Partners, Bay Partners and Consor Capital.

Considering that I’d never heard of this site it’s not surprising that it never managed to get off the ground. Checking out the wiki page on the service, it appears they left their lofty ambitions behind back in 2008, instead focusing on developing applications for social networks rather than trying to compete with them. This, it would seem, is the reason behind Microsoft’s curious lack of a real social network. They tried, they failed, and then they realised there was more to be done with the social networks than against them. This really is contrary to their normal kind of behaviour and I’m sure there’s an ulterior motive that I just can’t figure out.

Taking a wild stab in the dark I’d say that they just don’t think they can take the shine off Facebook’s crown. Microsoft really isn’t the kind of company you expect to make products and services like that; they’re more of an underlying services platform that will deliver those products to you. Considering this is where their main revenue line is drawn from it’s not surprising, but it’s still one of the first times it looks like Microsoft has just thrown in the towel and capitulated to the competition. It will be interesting to see how this maneuver pays off as Google starts to ramp up their efforts in the social space, with a rumoured Google Me service starting to make waves on the Internet. I still think Microsoft will hang back on that one too, but there’s every chance they’re waiting for the market to segment a bit before attempting to jump back into the social networking scene.