I’ve been using Windows 8 for a good six months now and, as someone who has used every previous Windows version going back to 3.1, it’s easy for me to say that it’s the best of the lot so far. Sure, I don’t use the Metro interface a lot, but that’s mostly because it’s not designed for the platform I’m running it on (a PC without a touch interface). Still it seems I can’t go a day without someone, usually an executive from a large OEM, bashing Windows 8 in one way or another. Considering that nearly everyone I talk to, including people who aren’t that technically inclined, says the direct opposite of what these executives are saying, I figured it was something worth looking into.
A lot of the criticism seems to stem from the awkward launch that Windows 8 had. Now I’m not going to try to be an apologist for this, as it’s well known that even Microsoft was disappointed with the initial release. For those of us who endured the Vista launch, however, it’s pretty obvious why this occurred: whenever a new Windows release deviates heavily from the previous one (whether in interface or underlying architecture) the sales are always lackluster, as Microsoft’s biggest customers, the enterprise buyers, don’t want to take the risk until all the teething issues have been sorted out. More crucially, whilst the launch might have been an all-round disappointment it didn’t take long for Windows 8 to gain some significant steam, getting on par with Windows 7 after 90 days.
Several other high profile people have gone on record saying that the Surface is also seeing lackluster sales. Coming not long after many people called the ultrabook market a failure (not unjustifiably), this makes it look like Windows 8’s introduction can’t have any impact on what looks like a declining PC market. Now I’m not going to argue against those numbers, however if you look at past Windows releases, take 7 for instance which was released in Q4 of 2009, you’ll see that whilst there was a small boost (one not out of line with the trend growth of the time), by the following quarter sales were back to where they were before. What this means is that while you’d expect people to buy a new computer in order to get the latest version of Windows, many in fact don’t. This doesn’t come as much of a surprise, as the differences in system requirements between Vista, Windows 7 and Windows 8 aren’t that great, and indeed any PC bought during the time these operating systems have been available would be more than capable of running all of them. Many computers reached the level of “good enough” for the vast majority of the population half a decade ago, so the lackluster growth isn’t surprising, nor is it anything to worry about from my point of view.
I think the backlash comes down to two reasons, and for both of them the blame does actually lie with Microsoft. The first is a bit of speculation on my part: I think Microsoft promised a boost in PC sales to the various OEMs in order to get them on board early with Windows 8. This is pretty much par for the course when you’re working with OEMs on a new and risky product, as otherwise they’ll wait until the product catches on before they throw their hat in the ring. Now whilst Microsoft could probably have handled Windows 8 not getting a lot of OEM support for a while, it’s likely that Windows 8 wouldn’t have caught up to 7’s sales in that first 90 day period, severely stunting its future growth. Whilst they wouldn’t have had a Vista-level disaster on their hands it would’ve been much worse than what they’re dealing with now.
Secondly I get the feeling that many of the OEMs aren’t too enthused about the Surface, and I don’t blame them. I said a while back that Microsoft needed to keep their product in the premium range in order to not piss off their partners, and they’ve done that to some extent; however with the exorbitant license cost for OEMs it’s incredibly hard for them to make a comparable tablet at the same price as the low end Surface RT. This has no doubt generated a bit of animosity towards Microsoft, with many OEM executives bashing the Surface at every chance they get despite it selling out almost immediately upon release. Whether Microsoft can repair this relationship remains to be seen, as the platform’s long term survivability will be made or broken by their OEMs, just as it has been in the past.
Microsoft took a risk with Windows 8 and by most accounts it appears to be paying off for them, unlike their previous experience with Vista. It might not be the saving grace of the PC industry, nor might it be a runaway success in the tablet market, but Microsoft is not a company that plays the short term game. Windows 8 is the beginning of a new direction for them and by all accounts it’s creating a solid foundation for Microsoft to build on further. Future releases will then be able to deliver even more capabilities on more platforms than any other ecosystem. This isn’t the first time they’ve been on the back foot and then managed to dominate a market long after it established itself (Xbox, anyone?) and I’d be really surprised if they failed this time around.
I often find myself trusted with doing things I’ve never done before thanks to my history of delivering, but I always make people well aware of my inexperience in such areas before I take them on. I do this because I know I’m not the greatest engineer/system administrator/coder around, but I do know that, given enough time, I can deliver something that’s exactly what they required. It’s actually an unfortunate manifestation of impostor syndrome whereby I’m constantly self-assessing my own skills, wondering if anything I’ve done was really that good or simply the product of all the people I worked with. Of course I’ve also worked with people who know they are the best at what they do, even if the reality doesn’t quite match up to their own self-image.
Typically these kinds of people take one of two forms, the first of which I’ll call The Guns. Guns are awesome people: they know everything there is to know about their job, they’re incredibly helpful and they’re a real treasure for the organisation. I’m happy to say that I’ve encountered more of these than the second type, and they’re in no small part responsible for a lot of the things that I know today. They are usually vastly under-appreciated for their talents, however, since they enjoy what they do to such a great extent that they don’t attempt to upset the status quo, toiling away in relative obscurity. These are the kinds of people I have infinite amounts of time for and they’re usually the ones I look to when I need help.
Then there’s the flip side: the Alpha Nerds.
These guys are typically responsible for some part of a larger system and, to their credit, they know it inside and out. I’d say on average about half of them got to that level of knowledge simply by being there for an inordinate amount of time, and through that they end up being highly valuable because of their vast corporate knowledge. The problem with these guys, as opposed to The Guns, is that they know this and use it to their advantage at almost every opportunity they get. Simple change to their system? Be prepared to do a whole bunch of additional work for them before it’ll happen. A problem that you’re responsible for but that’s out of your control due to other arrangements? They’ll drill you on it in order to reinforce their status with everyone else. I can’t tell you how detrimental these people are to the organisation, even if their system knowledge and expertise appears invaluable.
Of course this delineation of Guns and Alpha Nerds isn’t a hard and fast line, there’s a wide spectrum between the two extremes, but there is an inflection point where a Gun starts to turn Alpha and the benefits to the organisation start to tank. Indeed such a thing happened to me during my failed university project, where I didn’t notice that a Gun was turning Alpha on me, burning them out and leaving the project in a state where no one else could work on it even if they wanted to. Whilst the blame still rests solely on my shoulders for failing to recognise that, it still highlights how detrimental such behaviour can be when technical expertise isn’t coupled with a little bit of humility.
Indeed if your business is building products based on the talents of such people then it’s usually to your benefit to remove Alpha Nerds from your team, even if they are among your most talented members. This is especially true if you’re trying to invest in developing people professionally, as typically Alphas will end up being the de facto contacts for the biggest challenges, stifling the skill growth of the rest of the team. Whilst they might be worth 2.5 times your average performer, you’re likely limiting the chances of the team becoming more productive than they currently are, quite possibly by much more than what the Alpha is capable of delivering.
Like I said before though, I’m glad these kinds of people tend to be less common than their Gun counterparts. I believe this is because during the nascent stages of your career you’re likely to run up against an Alpha and see the detrimental impact they have. Knowing that, you’re then much more likely to work against becoming like them, and should you become an expert in your chosen area you’ll make a point of being approachable. Some people fail to do that, however, and proceed to make our lives a lot more difficult than they should be, but I’m sure this isn’t unique to IT and is innate to organisations both big and small.
One thing that not many people knew was that I was pretty keen on the whole Google TV idea when it was announced two years ago. I think that was partly because it was a collaboration between several companies that I admire (Sony, Logitech and, one I didn’t know about at the time, Intel) and also because of what it promised to deliver to end users. I was a fairly staunch supporter of it, to the point where I remember arguing with my friends that consumers simply weren’t ready for something like it rather than it being a failed product. In all honesty I can’t really support that position any more, and the idea of Google TV seems to be dead in the water for the foreseeable future.
What I didn’t know was that whilst Google, Sony and Logitech might have put the idea to one side, Intel has been working on developing their own product along similar lines, albeit from a different angle than you’d expect. Whilst I can’t imagine that they had invested that much in developing the hardware for the TVs (a quick Google search reveals they were Intel Atoms, chips they had been developing for two years prior to Google TV’s release), it appears they’re still seeking some return on that initial investment. At the same time, however, reports are coming in that Intel is dropping anywhere from $100 million to $1 billion on developing this new product, a serious amount of coin that industry analysts believe is an order of magnitude above anyone else currently playing in this space.
The difference between this and other Internet set top boxes appears to be the content deals that Intel is looking to strike with current cable TV providers. Now anyone who’s ever looked into getting any kind of pay TV package knows that whatever you sign up for you’re going to get a whole bunch of channels you don’t want bundled in alongside the ones you do, significantly diluting the value you derive from the service. Pay TV providers have long fought against the idea of allowing people to pick and choose (and indeed anyone who attempted to provide such a service didn’t appear to last long, à la SelecTV Australia) but with the success of on demand services like NetFlix and Hulu it’s quite possible that they might be coming around to the idea and see Intel as the vector of choice.
The feature list that’s been thrown around the press prior to an anticipated announcement at CES next week (which may or may not happen, depending on who you believe) does sound rather impressive, essentially giving you the on demand access that everyone wants right alongside the traditional programming that we’ve come to expect from pay TV services. The “Cloud DVR” idea, being able to replay/rewind/fast-forward shows without having to record them yourself, is evidence of this, and providing the traditional channels as well would seem to be a clever ploy to get the content onto their network. Of course traditional programming is required for certain things like sports and other live events, something which the on demand services have yet to fully incorporate into their offerings.
Whilst I’m not entirely enthused by the idea of yet another set top box (I’m already running low on HDMI ports as it is) the information I’ve been able to dig up on Intel’s offering does sound pretty compelling. Of course many of the features aren’t exactly new, you can do many of these things now with the right piece of hardware and a pay TV subscription, but the ability to pick and choose channels would be, and getting a Hulu-esque interface to watch previous episodes would be something that would interest me. If the price point is right, and it’s available globally rather than just in the USA, I could see myself trying it out for the select few channels that I’d like to see (along with their giant back catalogues, of course).
In any case it will be very interesting to see whether Intel says anything about their upcoming offering next week. If they do, we’ll have information direct from the source; if they don’t, we’ll have a good indication of which analysts really are talking to people involved in the project.
Kickstarter was one of those services that faced the typical chicken and egg problem of Internet start ups. As a crowdfunding platform its success was born out of the exposure it could bring to potential projects, and in the beginning that was essentially nothing. As time went on and crowdfunding became more mainstream, Kickstarter became the portal to get projects funded online, and since then we’ve seen the projects transform from being mostly single guys in garages to multi-disciplinary teams looking to launch disruptive technology. Whilst I still believe that Kickstarter doesn’t fundamentally change the rules of the funding game, the shift of the value judgement from a single entity to the wider world is a big one, and one that has seen many products come to life that might not have done otherwise.
Of course as the service and the number of projects grew over the years it was statistically inevitable that things would start to go wrong. Thankfully the majority of the problems faced by Kickstarter campaigns stem from overly ambitious product designers who underestimate the time it will take to get their product to market, leading to delays against their initial time frames. There haven’t been that many outright problems either, as failed projects never receive any money (and remain publicly accessible after the fact), and there’s only a handful of projects that have vanished into the ether, all apparently due to copyright claims.
Still there were a couple of high profile cases of projects being showcased that were little more than a concept someone wanted to create. Now this is the reason why Kickstarter exists, to get projects like that the funding they need to get over that initial hump, however for physical goods having nothing but a couple of product renderings can lead to some serious problems down the road, and there were numerous projects that suffered major delays because of this. There were even notable projects that had a prototype but struggled to scale to meet the demand created by their Kickstarter campaign.
Kickstarter, to its credit, has recognised this problem and recently changed the rules, putting it rather bluntly that Kickstarter is not a store.
Looking at the changes, the first thing you’d notice is the number of previously funded projects that would no longer fly under the new rules. Personally I think it’s a good thing, as requiring an actual prototype means that a project creator will have had to overcome many of the initial hurdles of bringing the product to reality and thus won’t be using the Kickstarter funds to do so. It does mean that the barrier to entry for the product and hardware categories just went up a few notches, but it also means there’s a much higher likelihood that such products will actually come into existence. The change that puts an end to multiple items is there to ensure another Pen Type-A/Pebble situation doesn’t occur again, although there’s still the potential for that to happen.
I think the changes are overwhelmingly positive and, whilst there might be some projects excluded from using Kickstarter as a funding platform, there are many other crowdfunding alternatives that still support projects of that nature. It also helps to make sure people understand the (usually low) risks of using Kickstarter, as there’s every chance in the world that the product/service will not be viable and neither Kickstarter nor the project founders are under any obligation to issue refunds for projects that fail after funding. This might be spelt out in no uncertain terms in the fine print when you sign up, but anything that makes people more aware of what they’re getting themselves into is a good thing and does wonders for Kickstarter’s reputation.
It hasn’t turned me off the idea, that’s for sure.
I have to admit that I was somewhat sour on the whole Kickstarter idea for quite a long time. Not that I thought it wasn’t viable or anything like that, there are many, many projects to prove the contrary, more that in the age of near instant gratification for nearly anything you can care to dream of, the idea of shelling out cash long before a product would ever grace my presence made me…apprehensive. It was also partially due to the fact that I didn’t really need nor want most of the products I saw on Kickstarter, even if they were technically cool. However I’ve recently backed two projects that I really wanted to see succeed, and both of them I backed at something of a premium level.
The first was the OUYA, the crazy Android games console that could shake up the console market in much the same way that the Nintendo Wii did. Of course it could also easily go the other way, as whilst the Kickstarter numbers were impressive they only translate to some 60,000-ish consoles, which in comparison to any of the three current major players is really quite small, with most of them selling that number every week for as long as they’re available. As long as the hardware gets delivered to me I will consider it successful, as whilst its primary purpose might be gaming it will make a solid media extender for a long time to come thanks to its use of Android as a base operating system.
One that really caught my eye though was Planetary Annihilation. Now game Kickstarters are always fraught with danger, as the majority of them never make their funding goals, but whilst Planetary Annihilation didn’t have an explosive day 1 like many high profile projects do, it did have consistent funding growth over time. In fact it was only last week that it reached its seemingly lofty funding goal of $900,000, and it has steadily been growing ever since. This is rather contrary to many of the other high profile Kickstarters I’ve seen over the past year or so, with many reaching their funding goals early and then staying steady until a last feverish burst before the final deadline. Looking at the way they structured their rewards you can see why this is so.
Most Kickstarters start out with their initial goal and upon getting more funding than they expected will usually try to make an announcement of what they intend to do with the extra funds. Whilst its admirable that many do come up with good ideas it usually comes late in the piece so the stretch goals can’t be used as a carrot for those who were on the edge of funding them or not. Right from the beginning though the guys behind Planetary Annihilation made it clear that they had many additional stretch goals already planned out should they get the requisite funding and, just to make people want to fund them more, kept them secret until previous funding goals had been achieved.
Additionally they continue to add value to the more premium tiers to encourage people to up their pledge level. This means people coming back to check on how the Kickstarter is going will have that little extra incentive to jump up to the next tier and indeed the vast majority of their funding is coming from the $95 and above tiers showing just how effective this can be. Whilst the extra rewards didn’t really mean that much to me (I pledged $250 because I’m one of those crazy collector’s edition nuts) I was definitely happy to see I was getting even more for my money.
With just 11 days to go on this particular project it’ll be interesting to see how many more of the stretch goals the Planetary Annihilation guys can hit before they reach the end of the funding period. In the week since achieving their funding goal they’ve already added another $200,000, so it’s quite possible that they could hit their next stretch goal without too much trouble. Whether this consistent funding flow builds to a mighty crescendo at the end, though, remains to be seen.
I’d definitely recommend backing them though, even if you only spend $20 to get the full game upon release. Some of the guys behind Planetary Annihilation are the same people responsible for Total Annihilation and the first Supreme Commander, two games which took the traditional RTS idea to a truly epic level of scale. If anyone can pull this kind of game off these guys can, and I really can’t wait to follow this game from the alpha stages right up to its final release.
For the longest time large media and entertainment companies have been competing against pirates in any way they deem necessary. For games they lavish restrictive DRM schemes on us, giving us only limited installs and mandating Internet access before we’re allowed to play. For music, movies and TV shows, us Australians seem to be relegated to the backwaters of delayed releases at prices cemented in decades old thinking from when it actually did cost a lot to ship stuff to us. The pirates, then, have been offering a service that, put simply, was far more attractive than its legitimate counterparts, and this is why piracy continues to be such a big problem today. A few companies have got the right idea though, and surprisingly one of them is our very own Australian Broadcasting Corporation.
For the uninitiated, the ABC has long had a pretty darn good service called iView, an on demand streaming service akin to the BBC’s iPlayer. PlayStation 3 owners in Australia are also lucky enough to have a dedicated link to it on the cross media bar, making it quite painless to use. If you also happen to be on Internode, all the traffic to iView is unmetered as well, meaning you can stream a good portion of the ABC back catalogue for nothing. When a couple of my favourite shows were on there (Daily Show, Colbert Report) I used it quite often, as I could just browse the list and hit play, nothing more was required. The service has gone downhill as of late as they don’t keep entire back catalogues up for very long (I think it was about 6 episodes per show, usually for a time after they had aired) but the idea behind it is very solid.
News comes today though that they’re doing something quite extraordinary: putting up episodes of Doctor Who online right after they’re shown in the UK, a week before they’re shown in Australia:
In an Australian first, the new adventures of Amy, Rory and The Doctor will be available on the ABC’s iView player from 5.10am AEST on Sunday September 2, just hours after the first episode airs in the UK.
The show will then reappear in the future, on ABC1 at 7:30pm the following Saturday, September 8.
ABC1 controller Brendan Dahill said the decision to air the show online before television was motivated by a desire to reduce piracy, as well as fulfill the needs of drooling Whovians, who have waited almost a year for the new series.
Indeed the biggest complaint that many people had regarding the Doctor Who series was that even when it was available in their region it was often significantly delayed. Doctor Who fans are a rabid bunch, and being out of sync with the greater community is something many of them couldn’t bear, so they turned to pirated solutions. Offering up the episodes at nearly the same time will go a long way towards turning those pirating users into viewers that can be monetized in some way, although how that will be done, given the ABC’s lack of commercial interests, remains to be seen. The producers of Doctor Who must be in on this, however, so I’m sure there’s something in it for them.
I think it’s quite commendable that the ABC has decided to tackle piracy in this way instead of taking more draconian measures, as is the usual route. Whilst it won’t stop pirating entirely it will go a long way towards making the ABC’s offering that much more desirable. I’m sure they could up the ante significantly by opening up their entire back catalogue for a nominal fee, but I’m not sure what kinds of regulations they’re under, being a government funded initiative and all. I might not be an ongoing customer but I could see myself buying a month here or there when I got interested in a series they had.
This is the future that media giants should be looking towards. Instead of trying to force the pirates further underground they need to make their offerings better than what they can get elsewhere. iView is a great example of that and they really are only a couple steps away from beating the pirate option in almost every respect. Hopefully this spurs the other commercial stations to do similar and then Australia won’t be the pirate ridden media backwater that it has been for the past couple decades.
It’s no secret that I’ve never been much of a fan of the OnLive service. Whilst my initial scepticism came from my roots as someone who didn’t have decent Internet for the vast majority of his life whilst everyone else in the world seemed to, since then I’ve seen fundamental problems with the service that I felt would severely hamper adoption. Primarily it was the capital heavy nature of the beast, requiring a large number of high end gaming PCs to be always on and available even when there was little demand for them. That and the input lag issue would have made many games (FPS being the most prominent genre) nearly unplayable, at least in my mind. Still I never truly believed that OnLive would struggle that much, as there definitely seemed to be a lot of people eager to use the service.
For once though I may have been right.
OnLive might have been a rather capital intensive idea but it didn’t take long for them to build out a company that was being valued in the $1 billion range, no small feat by any stretch of the imagination. It was at that point that I started doubting my earlier suspicions, as that level of value doesn’t usually come without some solid financials behind it, but it seems that since that dizzying high (and most likely in reaction to Sony’s acquisition of their competitor Gaikai for much less than that) they only had one place to go, and that was down:
We’re hearing from a reliable source that OnLive’s founder and CEO Steve Perlman finally decided to make an exit — and in the process, is screwing the employees who helped build the company and brand. The cloud gaming company reportedly had several suitors over the last few years (perhaps including Microsoft) but Perlman reportedly held tight control over the company, apparently not wanting to sell or share any of OnLive’s secret sauce.
Our source tells us that the buyer wants all of OnLive’s assets — the intellectual property, branding, and likely patents — but the plan is to keep the gaming company up and running. However, OnLive management cleaned house today, reportedly firing nearly the entire staff, and we hear it was done just to reduce the company’s liability, thus reducing employee equity to practically zero. Yeah, it’s a massive dick move.
We’ve seen this kind of behaviour before in companies like the ill-fated MySpace, and whilst the company will say many things about why they’re doing it, essentially it makes the acquisition a lot more attractive for the buyer due to the lower ongoing costs. Whoever this well funded venture capitalist is, they don’t seem to be particularly interested in the company of OnLive itself so much as the IP and the massive amount of infrastructure that’s been built up over the course of the last 3 years. No matter how the service is doing financially those things have some intrinsic value behind them, and although the new mysterious backer has committed to keeping the service running I’m not sure how much faith can be put in those words.
Granted there are services that were so costly to build that the initial companies who built them folded, but the subsequent owner who acquired everything at a fire sale price went on to make a very profitable service (see Iridium Communications for a real world example of this). However the figures we’ve been seeing on OnLive’s numbers since this story broke don’t paint a particularly rosy picture for the health of the service. When you have a fleet of 8000 servers servicing at most 1600 users that doesn’t seem sustainable by any means I can think of, unless the users were paying through the nose for the service (which they’re not, unfortunately). It’s possible that the massive amount of lay offs, coupled with a reduction in their current infrastructure base, might see OnLive become a profitable enterprise once again, but I’ll have to say that I’m still sceptical.
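Those reported figures make the sustainability problem easy to quantify with a back-of-the-envelope sketch. The server and user counts come from the reports above; the per-server monthly cost is a purely assumed placeholder for illustration, not a real OnLive figure:

```python
# Back-of-the-envelope look at the reported OnLive figures.
SERVERS = 8000      # reported fleet size
PEAK_USERS = 1600   # reported peak concurrent users

# Even at the busiest moment, most of the fleet sits idle.
peak_utilisation = PEAK_USERS / SERVERS
print(f"Peak utilisation: {peak_utilisation:.0%}")

# Assumed all-in monthly cost per server (hosting, power,
# depreciation). This number is a placeholder, not a real figure.
COST_PER_SERVER_MONTH = 100  # USD, assumed

monthly_cost = SERVERS * COST_PER_SERVER_MONTH
# Spread the fleet cost over peak concurrent users, which is
# a generous measure since average concurrency would be lower.
cost_per_peak_user = monthly_cost / PEAK_USERS
print(f"Monthly fleet cost: ${monthly_cost:,}")
print(f"Cost per peak concurrent user: ${cost_per_peak_user:,.0f}/month")
```

Even with that conservative placeholder cost, each peak user would need to be paying hundreds of dollars a month just to cover the hardware, which is nowhere near what OnLive was charging.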
Apart from the monthly access fee requirement being dropped, none of the issues that I and countless other gamers have highlighted have been addressed, and their niche of people who want to play high end games without the cost (and don’t own a console) just isn’t big enough to support their idea. I could see something like this service being an also-ran for a large company, much like Sony is planning to do with Gaikai, but as a stand alone enterprise the costs of establishing the infrastructure needed to attract the required user base are just too high. This is not even touching on the input lag or the ownership/DRM issues either, both of which have been shown to be deal breakers for many gamers contemplating the service.
It’s a bit of a shame really, as whilst I love being right about these things I’d much rather be proven wrong, especially when it comes to non-traditional ideas like OnLive. It’s entirely possible that their new benefactor could turn things around for them, but they haven’t done a lot to endear themselves to the public or their current employees, so their battle is going to be very much uphill from now on. I’m still willing to be proven wrong on this, though as time goes on it seems less and less likely that it’ll happen, and that’s a terrible thing for my already inflated ego.
I’m something of a collector of failed MMORPGs. Ever since my addiction began with World of Warcraft it seemed I was forever doomed to roam the genre in search of that same feeling that World of Warcraft inspired in me. Let’s just say that in my travels I’ve seen nearly everything, from inventive PvP systems to epic grinds that required almost more time than I had invested in World of Warcraft just to reach the end game content. Over time I’ve started to notice the patterns behind what causes some MMORPGs to carry on whilst others struggle to keep their users just months after release. The answer is quite simple, but it seems some academics might have a different idea.
Take Ramin Shokrizade, a self-proclaimed virtual economy expert whose latest piece takes aim at Star Wars: The Old Republic’s decision to convert to a free to play model in order to get people back into the game. Whilst he does make some good points about how TOR felt like a massively single player game (the campaign was arguably the best thing about it, even though it was a lot more fun with friends), the main thrust of his article, that the monetization strategy was the primary cause of failure, is ultimately only a side issue to the bigger problems at hand.
Shokrizade makes the point that the value players generated, judged by auction house prices and the cost of purchasing credits from real money trading sites, decreased rapidly over the first month. He lays the blame for this decline on an instance reset exploit that allowed users to generate quite a lot of credits, and whilst that might be a factor his analysis overlooks the fact that in any new MMORPG the in game currency attracts a high premium at the beginning, usually because there isn’t much of it in circulation. Indeed if you tracked the same statistic for other virtual worlds you would see identical declines as the currency generating capacity of the wider player base and the gold farmers increased. This is not a new phenomenon; I’ve seen it happen in nearly every MMO I’ve played to date.
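The mechanism is simple enough to illustrate with a toy model (the numbers here are entirely made up, not real SW:TOR data): if credits trade roughly in proportion to their scarcity, the real-money price per credit falls as the total supply in circulation grows, exploit or no exploit.

```python
# Toy illustration of launch-window currency premiums (hypothetical numbers).
# Assumption: real-money price per credit moves inversely with total supply.
initial_supply = 1_000_000   # hypothetical credits in circulation at launch
weekly_growth = 2.0          # assumed: players + gold farmers double supply weekly
base_price = 10.0            # hypothetical USD per million credits at launch

for week in range(5):
    supply = initial_supply * (weekly_growth ** week)
    price = base_price * initial_supply / supply  # price ~ 1 / supply
    print(f"Week {week}: supply={supply:,.0f}, ${price:.2f} per million credits")
```

Under those assumptions the price halves every week purely from normal currency generation, which is exactly the kind of decline Shokrizade attributes to the exploit.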
He also makes the mistake of saying that “As combat in SWTOR was balanced for PvE, PvP combat balance was never attainable”. Nearly all MMORPGs focus on one of these two aspects in order to attract players. SW:TOR focused heavily on the PvE aspect as that’s where BioWare’s strengths lie, and by all accounts they succeeded at it. Whilst PvP wasn’t as balanced in the beginning, saying that PvP balance was unattainable because of the PvE focus is laughable: balance is an ongoing process that evolves with the game. Indeed by the time I left the PvP balance was far better thanks to the level 50 only arenas, more people having better gear and vast improvements to the game code that made the world PvP areas much more playable. PvP items were comparable to their PvE counterparts but carried PvP stats, which unfortunately made them useless for guilds tackling high end content on the hardest difficulties as you couldn’t reach the stats required.
However Shokrizade’s biggest blunder is laying the blame for SW:TOR’s current troubles on its monetization scheme. He posits that the unlimited model, where you pay a monthly fee and get access to the entire game, encourages people to play through all the content as fast as possible before dropping it for the next game. Now whilst I won’t discount that many a hardcore friend of mine took time off work to reach level 50 in the space of 4 days or so, this was far from the norm, with many players taking at least a month to reach max level (I would know, I was among them). Even then those who did reach max level would usually roll another character straight afterwards to level with the others who were still catching up, mostly because the single player story lines for each archetype are unique. He then goes on to peddle his ideal solution and declares that the monetization scheme is the ultimate factor in deciding an MMORPG’s success.
This is as far from the truth as I’ve seen anyone get. Anyone who’s played MMORPGs knows that there’s one thing and one thing only that decides whether a game in this genre will be successful or not. That thing is the content.
Of all the failed MMORPGs I’ve played over the years, the reason they struggled can always be traced back to problems with content. Age of Conan is probably the best example I can think of: it promised a large world, shaped by your actions, with content all the way up to a staggering level 80. This would have been all well and good except that once you hit level 50 there was no content to speak of until level 80. Warhammer Online had the same issue as people quickly tired of the warzones and many servers locked themselves into a stalemate for the end game PvP, driving players away. Indeed the biggest problem SW:TOR had was that the end game content was just so gosh darn accessible, meaning that within the first month or two anyone could see the entire game if they were so inclined.
This is the exact reason why so many people left SW:TOR when they did. My guild mates and I managed to blast through all the end game raids in just under a week once we were all level 50, thanks to the normal difficulty level which made the encounters quite easy by end game standards. After that point it’s hard to motivate people to redo content they’ve done before, especially when the rewards are only incremental upgrades. Then the only thing left is to grind PvP or flashpoints for better gear, and only the hardcore will keep doing that after a month or so.
So why does Shokrizade believe that monetization, above all else, is the key to MMORPG success? At the risk of stumbling into ad hominem territory the reason seems pretty obvious: he’s a self-proclaimed expert on virtual economies even though his only experience in economics comes from playing EVE Online (and I’m struggling to verify his claims of leading a 5000 strong corporation there). It’s prudent, then, to take what he says with a grain of salt as he has a vested interest in saying things like this, even when they don’t gel with reality.
MMORPGs are hard things to create and maintain, and it’s a testament to companies like Blizzard and BioWare that they’ve managed to release one without going bankrupt in the process. Whilst SW:TOR might be struggling to keep people playing, so are nearly all MMORPGs; even the mighty World of Warcraft is back to 2008 subscription numbers (is their monetization strategy the problem, Shokrizade?), which shows just how hard it is to get people coming back time and time again. The one secret is the content, and there’s no doubt Blizzard has mastered that art. For all its successes with the campaign missions, BioWare unfortunately missed the mark and is paying the price for it now.
There’s no denying the success Apple has enjoyed thanks to the major shift in strategy under Steve Jobs’ reign. Before then they were seen as a direct competitor to Microsoft in almost every way: iMacs vs PCs, Mac OS vs Windows, and at pretty much every turn they were losing the battle save for a few dedicated niches that kept them afloat. That all changed when they got into the consumer electronics space and began bringing sacred geek technology to the masses in a highly desirable package. One aspect of their business suffered immensely because of this however: their enterprise sector.
Keen readers will note this isn’t the first time I’ve mentioned Apple’s less than stellar support of the enterprise market, and nothing has really changed in the 8 months since I wrote that last post. Apple as a company is almost entirely dedicated to the consumer space, with token efforts at enterprise integration thrown in to make it look like their products play well there. Strangely enough this token effort seems to be working to convince developers that Apple (well, really iOS) is poised to take over the enterprise space:
In the largest survey of its kind, Appcelerator developers were asked what operating system is best positioned to win the enterprise market. Developers said iOS over Android by a 53% to 38% margin. Last year, in its second quarter survey, the two companies were in a dead heat for the enterprise market, tied at 44%.
In a surprise of sorts, Windows showed some life as 33% said they would be interested in developing apps on the Windows 8 tablet.
Now there is value in gauging developers’ sentiment regarding the various platforms, as it gives you some insight into which ones they’d prefer to develop for, but that doesn’t really serve as an indicator of which platform will win a particular market. I’d hazard a guess (one based on previous trends) that the same developers will tell you iOS is the platform to develop for even though it’s quite clear Android is winning the consumer space by a very wide margin. I believe there’s the same level of disconnect between what Appcelerator’s developers are saying and the reality.
For starters, whatever foothold iOS has in the enterprise space is not born of any effort Apple has made; all of it comes from non-Apple products. For iOS to really make a dent in the enterprise market it will need significant buy in from Apple itself, and whilst there have been some inroads (like the Enterprise Distribution method for iOS applications) I’m just not seeing anything like that from Apple currently. All of their enterprise offerings are simplistic and token, lacking many of the features that enterprises require today. They may have mindshare and numbers that will help drive people to create integration between iOS products and other enterprise applications, but so does Android, meaning that’s not really an advantage at all.
What gets me is the (I’m paraphrasing) “sort of surprise” that developers were looking to Windows 8 for developing applications. Taken in the enterprise context the only real surprise is why there aren’t more developers looking at the platform, as if any platform has a chance at dominating this sector it is in fact Windows 8. There’s no doubting the challenges the platform faces, what with Apple dominating the tablet space that Microsoft is only just looking at entering seriously, but the leverage Microsoft has for integrating with all their enterprise applications simply can’t be ignored. They may not have the numbers yet, but if developer mindshare is the key factor here then Microsoft wins hands down; that just won’t show up in a survey that doesn’t include Windows developers (Appcelerator’s survey covers its own users only, and the platform currently does not support Windows Phone).
I’ve had my share of experience integrating iOS and Android with various enterprise applications and, for what it’s worth, none of them are really up to the level of native platform applications. Sure you can get your email and even VPN back into a full desktop from your smartphone, but that’s nothing that hasn’t been done before. The executives might be pushing hard to get their iPads/toy du jour onto the enterprise systems, but those devices won’t penetrate much further until they can provide some real value to those outside the executive arena. Currently the only platform with any chance of doing that well is Microsoft’s, with Android coming in second.
None of this means that Apple/iOS can’t do well in the enterprise space, just that there are other players far better positioned to do so. Should Apple put some focus on the enterprise market it’s quite likely they could capture some market share from Microsoft and their other partners, but their business model has been moving steadily away from this sector ever since they first released the iPod over a decade ago. A return to the enterprise world is not something I expect from Apple or its products any time soon, and no amount of developer sentiment is going to change that.
I’ve long been of the mind that whilst we’re seeing a lot of new businesses fully cloudify their operations, mostly because they have the luxury of designing their processes around these cloud services, established organisations will more than likely never achieve full cloud integration. Whether that’s because of data sovereignty issues, lack of trust in the services themselves or simply fear of change doesn’t really matter; it’s up to the cloud providers to offer solutions that ease their customers’ transition onto the cloud platform. From my perspective the best way to approach this is with hybrid cloud solutions, ones that can leverage current investments in infrastructure whilst giving the flexibility of cloud services. Until recently there weren’t many companies taking this approach, but that has changed significantly in the past few months.
However there’s been one major player in the cloud game that’s been strangely absent from the hybrid cloud space. I am, of course, referring to Microsoft: whilst they have extensive public cloud offerings in the form of their hosted services as well as Azure, they haven’t really been able to offer anything past their usual Hyper-V plus System Centre suite of products. Curiously though Microsoft, and many others it seems, have been running with the definition of a private cloud as being just that: a highly virtualized environment with dynamic resourcing. I’ll be honest, I don’t share that definition at all as realistically that’s just Infrastructure as a Service, a critical part of any cloud service but not a cloud service in its own right.
They are however attempting to make inroads into the private cloud area with their latest announcement, the Service Management Portal. When I first read about it this was touted as Microsoft opening the doors for service providers to host their own little Azure clouds, but it’s in fact nothing like that at all. Indeed it just seems to be an extension of their current Software as a Service offerings, which is nothing that couldn’t be achieved before with the current tools available. System Centre Configuration Manager 2012 appears to make this process a heck of a lot easier, mind you, but with it only being 3 months after its RTM release I can’t say it would be in production use at scale anywhere bar Microsoft at this point in time.
It’s quite possible that they’re trying a different approach after their failed attempt at getting Azure clouds up elsewhere via the Azure Appliance initiative. The problem with that solution was the scale required: the only provider I know of that actually offers the Azure services is Fujitsu, and try as you might you won’t be able to sign up for the service without engaging with them directly. That’s incredibly counter-intuitive to the way the cloud should work, so it isn’t surprising that Microsoft has struggled to make any sort of inroads with that strategy.
Microsoft really has a big opportunity here to use their captive market of organisations heavily invested in their products as leverage in a private/hybrid cloud strategy. First they’d need to make the Azure platform available as a Server Role on Windows Server 2012. That would allow those servers to become part of a private computing cloud onto which applications could be deployed. Microsoft could then make their core applications (Exchange, SharePoint, etc.) available as Azure applications, nullifying the need for administrators to do rigorous architecture work in order to deploy them. The private cloud could then be leveraged by developers to build the required applications which could, if required, burst out into the public cloud for additional resources. If Microsoft is serious about bringing the cloud to their large customers they’ll have to outgrow the silly notion that SCCM + Hyper-V merits the cloud tag, as realistically it’s anything but.
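The “burst out” placement logic at the heart of that idea can be sketched in a few lines. To be clear, no such Azure Server Role or API exists; the function and parameter names below are entirely hypothetical and only illustrate the decision being described:

```python
# Hypothetical sketch of hybrid-cloud "burst" placement: satisfy demand from
# private (on-premises) capacity first, overflowing to the public cloud only
# when the private cloud is exhausted and bursting is permitted.

def place_workload(required_capacity, private_free_capacity, allow_public_burst):
    """Split a capacity request between private and public clouds."""
    private_share = min(required_capacity, private_free_capacity)
    overflow = required_capacity - private_share
    if overflow > 0 and not allow_public_burst:
        raise RuntimeError("private cloud exhausted and bursting is disabled")
    return {"private": private_share, "public": overflow}

# A workload needing 120 units against 100 free private units bursts 20 out.
print(place_workload(120, 100, True))   # {'private': 100, 'public': 20}
print(place_workload(80, 100, True))    # {'private': 80, 'public': 0}
```

The design choice worth noting is that the private cloud is always preferred, so the public side only ever carries the marginal load, which is exactly the cost profile that would make a hybrid model attractive to organisations with existing infrastructure investments.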
I understand that no one is really doing this sort of thing currently (HP’s cloud gets close, but I’ve yet to hear of anyone who wasn’t a pilot customer seriously looking at it) but Microsoft is the kind of company that has the right combination of established infrastructure within organisations, cloud services and a technically savvy consumer base to make such a solution viable. Until they offer some deployable form of Azure to their end users, any product they offer as a private cloud solution will be one in name only. Making Azure deployable, though, could be a huge boon to their business and could very well reform the way they do computing.