Monthly Archives: November 2011

Just Plain Awesome.

I have a lot of respect for fine engineering. It doesn’t matter what field it comes from either as I find there’s an elegance about things that have been so well planned, designed and then implemented. I recently came across this video that showcases the world’s smallest V12 engine, all hand made (apart from the screws). It’s a bit lengthy but I was mesmerized by it, loving the incredible attention to detail and the beauty of something so complex coming together so well.


I think it also plays into my not-so-secret love of steampunk stuff, what with all the elaborate metalwork and interconnecting parts.

Have Money, Want Content, Will Spend.

I spent the better part of my youth pirating nearly every bit of software I wanted. It’s not that I was doing it on principle, no, it was more that I didn’t have the cash required to fuel my insane desire for the latest computer hardware, software and everything else that I had my eye on back then. Sure, you can argue that I should have just gone without instead of pirating but in the end they were never going to get money from me anyway. Those software and game developers that did make a decent product have since gained a well paying customer in the form of my current self, who spends lavishly on collector’s editions and any software that he needs.

One area I’ve never paid a dime for (although I happily would, as I’ll explain later) is TV shows. I was a pretty big TV watcher as a kid, even going to the point of recording shows that I couldn’t watch in the morning (because I had to catch the bus) so that I could watch them in the afternoon. As I discovered the wonders of playing video on your PC I started to consume most of my media there as it was just so much more convenient than waiting for a particular show to come on at a certain time. Australia is also quite atrocious for getting new shows as they’re released, usually receiving them months after the rest of the world, if at all. However whilst I might be able to get everything for free it’s still somewhat of an inconvenience, especially when I see a service like Steam that has no equivalent for TV in Australia.

It’s not like these services don’t exist either. The USA has things like Netflix and Hulu that stream TV shows to users and the latter will even do so free of charge. From a technical standpoint there’s no reason why these services can’t work anywhere in the world, they’re just another set of packets travelling alongside all the others. However both of those services employ heavy geo-fencing, the process by which anyone connecting to them is identified by region and, should they be outside the USA, blocked from viewing the content. Primarily this is because of licensing agreements with the content providers, who want to control which content goes where. For places like Australia however this just leads to people pirating the content instead of watching it on TV or buying it in stores, something I’m sure they’re not entirely happy about.

This issue came up recently when a bunch of ISPs got together and proposed a new system to deal with copyright infringement. On the surface it looked like long time supporters of privacy were caving under pressure from rights holders but it’s actually anything but. Rather it’s an idea to make the discovery process more open, one that focuses on educating end users rather than punishing them. Whilst I don’t like the system proposed I did like the fact that they recognised rights holders needed to do a better job of providing content to Australian residents. The fact of the matter is many turn to piracy because they simply can’t get the content anywhere else. A service like Hulu in Australia would be wildly popular and would be as good for the rights holders as Steam was for the games industry.

Steam has shown that convenience and service are what drive people to piracy, not strictly price. Of course Steam’s regular fire sales have made sure that people part with more cash than they usually would but the fact is that they deliver a product that’s as convenient as (and sometimes more convenient than) what the pirates offer. Right now rights holders are still delivering products that are less convenient (and sometimes even worse overall) and so the piracy option is far more attractive. I know this is asking a lot of an industry that’s feared technology for the better part of a century but in the end the problem doesn’t lie with the pirates, it lies with them.

Rationality Wins: Australia not Entertaining the Anti-Vaccine Movement.

I don’t have kids and probably won’t for another few years but that doesn’t mean I can’t understand some of the things that parents go through. I used to work in child care back in the day and by far the biggest concern any of the parents had was their child’s health. As a care giver every child’s health was my concern as disease has a tendency to spread rapidly in those situations and one sick kid can mean dozens if not taken care of correctly. This, amongst numerous other reasons, is why I fail to understand why some parents refuse to vaccinate their children, as doing so puts them (and other children) at great risk.

Now I know the reasons why most parents don’t vaccinate their children. Mostly it has to do with their concern that vaccines, in particular the triple shot MMR, will cause their child to develop an Autism Spectrum Disorder. The controversy surrounding this is well known but suffice it to say that all the evidence and scientific research shows that vaccines cannot and do not lead to ASDs. Any correlation that can be drawn between the two is simply that and cannot be used as a basis for causation. The fact of the matter is that so far the only proven cause for autism is genetics and any environmental factors are either still under investigation or have been thoroughly disproved. To say otherwise at this point is unscientific conjecture and it would be reckless to base your child’s health decisions on such things.

The usual retort people have for the decision not to vaccinate is that it’s their decision and they should have the choice to make it. At this point the crazed libertarian in me starts shrieking out in support of them and I’d agree with him, right up until the point where their decisions start to impact others. The decision not to vaccinate your child is not only a bad decision for them, it’s also a bad thing for society at large. Herd immunity requires a certain number of people to be immune to a disease before the non-immune can benefit from their protection. The anti-vaccination movement has had a big enough impact that for certain diseases we’re actually below that critical threshold and those who can’t be made immune, like those who are too young, end up paying the price.

Thankfully I live in Australia, a place where the government has finally decided to hit people who refuse to vaccinate their children where it hurts, in their wallet:

Parents who do not have their children fully immunised will be stripped of family tax benefits under a scheme announced by the Federal Government.

The Government says 11 per cent of five-year-olds are not immunised and has announced a shake-up of the system which will take effect from July 1 next year.

Under the changes, families who refuse vaccinations face losing up to $2,100 per child in benefits.

That number of unvaccinated children is rather scary as the herd immunity threshold for pertussis (whooping cough) and measles is above that vaccination rate. Now this change won’t convince everyone, there are some who refuse to vaccinate on principle, but hopefully it will drive the numbers up high enough that it won’t matter any more. As it stands now we’re in danger of seeing a resurgence of diseases that, to put it simply, we shouldn’t have to see.

This isn’t one of those ethical grey areas where you can justify your decision based on whatever you believe in; the fact is that if your child isn’t vaccinated they are not only at risk themselves but they also put others at risk. The only time I’d support someone not vaccinating their children is if they kept them away from all other children, which I think everyone will agree would be far more damaging to them than a shot in the arm. So if the Australian government isn’t going to entertain the anti-vaccination movement neither should you, and if you still feel the need to go against the grain because of some wacky view you saw on the Internet then I’m glad you’re getting slugged for it. Maybe then you’ll think twice about the callous decision you’re making.

[Figure: Fusion-IO ioDrive maximised IOPS]

Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I’ve ever done the one that’s most notably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old fashioned spinning rust disks have come a long way in recent years in terms of performance they’re still far and away the slowest component in any modern system. This is what chokes most PCs’ performance as the disk is a huge bottleneck, slowing everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set but all of those pale in comparison to even a single SSD.

The problem doesn’t go away for the server environment either, in fact most of the server performance problems I’ve diagnosed have had their roots in poor disk performance. Over the years I’ve discovered quite a few tricks to get around the problems presented by traditional disk drives but there are just some limitations you can’t overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company that I had heard of a while back, Fusion-IO, who specialised in making enterprise class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect and to my surprise I was able to snag an evaluation unit in order to put it through its paces.

The card we were sent was one of the 640GB ioDrives. It’s surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn’t sport any branding I recognised (usually it’s something recognisable like Samsung) but as it turns out each chip is a 128GB Micron NAND flash chip. If all that storage were presented raw it would total some 3.1TB and this is telling of the underlying architecture of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). To get that kind of storage out of the Micron NAND chips you’d only need 5 of them but the ioDrive comes with a grand total of 25 dotting the board, an amount no traditional RAID scheme can account for. So based on the fact that there are 25 chips and only 5 chips’ worth of capacity available it follows that the Fusion-IO card uses quintuplet sets of chips to provide the high level of performance that they claim. That’s an incredible amount of parallelism and if I’m honest I expected these chips to all be 256MB chips that were all RAID 1 to make one big drive.
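
Laid out as a quick back-of-the-envelope sketch (the chip capacity and count are the figures observed above; the rest is just arithmetic):

```python
# Back-of-the-envelope maths on the ioDrive's NAND layout, using the figures above.
CHIP_CAPACITY_GB = 128   # each Micron NAND package
CHIP_COUNT = 25          # packages counted on the board
PRESENTED_GB = 640       # capacity the card presents to the OS

raw_gb = CHIP_CAPACITY_GB * CHIP_COUNT            # 3200GB (~3.1TB) of raw flash
chips_needed = PRESENTED_GB / CHIP_CAPACITY_GB    # 5 chips' worth of presented capacity
parallelism = CHIP_COUNT / chips_needed           # 5 physical chips per chip of capacity

print(f"Raw: {raw_gb}GB, presented: {PRESENTED_GB}GB, ratio: {parallelism:.0f}:1")
```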

Funnily enough I did actually find some Samsung chips on this card, two 1GB DDR2 chips. These are most likely used for the CPU on the ioDrive which has a front side bus of either 333 or 400MHz based on the RAM speed.

But enough of the techno geekery, what’s really important is how well this thing performs in comparison to traditional disks and whether or not it’s worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past in order to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you want to know about your disk subsystem. If you’re interested in replicating the results I’ve got then I’ve uploaded a copy of my configuration file here. The servers included in the test are the Dell M610x, Dell M710HD, Dell M910, Dell R710 and an HP DL380G7. For all the tests (bar the two labelled local install) each server is a base install of ESXi 5 with a Windows 2008R2 virtual machine installed on top of it. The specs of the virtual machine are 4 vCPUs, 4GB RAM and a 40GB disk.

As you can see the ioDrive really is in a class all of its own. The only server that comes close in terms of IOPS is the M910 and that’s because it’s sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive though is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it’s not shown in these tests the one area where the traditional disks actually equal the Fusion-IO is throughput once you get up to really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. If I had 2 Fusion-IO cards though I’m sure I could easily double that performance figure.
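
That behaviour falls straight out of the relationship between IOPS, block size and throughput: bandwidth is simply the two multiplied together, so at large block sizes a shelf of spinning disks can match an SSD on raw MB/s even while being hopelessly outclassed on small random IO. A minimal illustration (the numbers are purely illustrative, not figures from these tests):

```python
def throughput_mbps(iops: float, block_size_bytes: int) -> float:
    """Bandwidth implied by a given IOPS figure at a given block size."""
    return iops * block_size_bytes / 1_000_000

# Illustrative numbers only, not measurements from the tests above.
print(throughput_mbps(80_000, 512))        # ~41 MB/s: huge IOPS, tiny blocks
print(throughput_mbps(1_000, 1_048_576))   # ~1049 MB/s: modest IOPS, 1MB blocks
```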

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are impressive enough but Fusion-IO claims that this particular drive is capable of something on the order of 140,000 IOPS if I played my cards right. Using the local install of Windows 2008 I had on there I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as shown in the Dell IO controller software, are shown below:

Ignoring the small blip in the centre where I had to restart the test you can see that whilst the ioDrive is capable of some pretty incredible IO the advertised maximums are more theoretical than practical. I tried several different tests and while a few averaged higher than this (approximately 80K IOPS was my best) it was still a far cry from the figures they have quoted. Had they gotten within 10~20% I would’ve given it to them but whilst the ioDrive’s performance is incredible it’s not quite as incredible as the marketing department would have you believe.

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning rust counterparts and if you were in need of some really crazy performance you really couldn’t go past one of these cards. For the purpose we had in mind for it however (putting it inside an M610x blade) I can’t really recommend it as that’s a full height blade that only has the power of a half height. The M910 represents much better value with its crazy CPU and RAM count and its SSDs, whilst being far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn’t have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment) but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.

A Guide to Game Reviews on The Refined Geek.

Welcome to the review guide for The Refined Geek. If you don’t need the full explanation of why I review the way I do feel free to skip to the TLDR section at the bottom.

It’s come to my attention that whilst I think my method for reviewing games is somewhat transparent it’s really anything but. Although I’ve tried to keep the same format and style over the past couple of years I’ve noticed that my initial reviews are worlds away from my current format. So in the interest of transparency (and hopefully adding some clarity) I thought I’d give you a brief overview of the process I go through in order to bring you game reviews every other week or so.

The first, and of course most enjoyable, part is that I’ll play the game that gets reviewed. My preference is for games on the PC as that’s the best platform I’ve found for conducting reviews. I’m not averse to playing on other platforms (as shown by my reviews of games like Assassin’s Creed, Heavy Rain, Red Dead Redemption, etc.) but I prefer the PC since I can take my own screen shots (all console games have thus far used press kit screen shots) and I’m not taking up the main TV, which I sometimes have to fight my wife for. I am working on a setup for reviewing console only titles but I haven’t found one that works exactly the way I want it to.

One big point I make is to finish all the games that I play before reviewing them. This can be something of a burden with games that pack on the game time (like Skyrim, for instance) but I feel reviewing a game I didn’t complete doesn’t do it justice. The problem with this of course is that some games drag out too long and I won’t finish them or, what usually happens, the game has nothing redeeming about it and I won’t attempt to finish it. I’ve had several games like this and unfortunately, since my time is somewhat limited, I don’t like to waste it on titles that just aren’t fun to slog through.

My review scoring system is quite simplistic: all games start off with a perfect 10/10 score and then lose points for things that detract from the game experience. Think of it like an innocent until proven guilty sort of scenario where I like to err on the side of a game being good rather than bad. The issues are mostly subjective but there are also some objective things like game breaking bugs, bad performance or poor game design. The amount I take off is somewhat subjective, usually based on how badly something detracts from the game, and I haven’t bothered itemizing it since if you read my review you can easily tell where the game lost points. Still, if there’s interest in seeing a breakdown of why I took points off for certain things I’ll be sure to include it in future reviews.

I’ve had some complaints that my review scores are too high and hopefully the last 2 paragraphs have explained why that might be. The high scores are a combination of my scoring scheme (everything starts out high) and the fact that I simply don’t finish crap games (so everything that gets reviewed meets a certain standard). I’m not one of those 7/10 guys nor are the game developers paying me for their reviews, as all games (bar one exception, which I made clear at the time and will always do so) have been bought with my own money and played just like any other retail customer would play them. Whether that makes the final score useful to you or not is left as an exercise for the reader but it is a useful guide to how I feel if you’re not in the mood for reading the entire review.

I try to keep the structure consistent if for the sole reason that it stops me writing an incoherent mess. For the most part I’ll start with some background of the game, genre or developers depending on what kind of history I have with those aspects. I’ll then set the scene for the game to give the following sections some context before breaking it down into specific areas like graphics and performance, game and set design, combat, game mechanics, overall plot summary and finally multiplayer. The multiplayer section is a relatively recent addition to the reviews as I had avoided it previously but since I’ve had many games that are defined by their multiplayer experience I found it necessary to include it.

TLDR:

  • Majority of games are played on PC, mostly so that I can take screen shots (console solution in the works)
  • All reviewed games are played to their end.
  • Games start off with 10/10 and lose points for things that detract from the game.
  • High review scores are a symptom of the previous 2 points (I don’t usually finish bad games).
  • Most games are bought at retail and any reviews where a copy was given for review will be flagged as such
  • Structure is usually consistent, multiplayer included where available.

If you feel like I’m missing something important or you’d like to see a particular aspect included in future reviews feel free to drop me a comment here, send a tweet to @davidklemke or drop me a line at [email protected] 

Why Macs and Enterprise Computing Don’t Mix.

I’m a big fan of technology that makes users happy. As an administrator anything that keeps users satisfied and working productively means more time for me to make the environment even better for them. It’s a great positive feedback loop that builds on itself continually, leading to an environment that’s stable, cutting edge and just plain fun to use and administer. Of course the picture I’ve just painted is something of an IT administrator nirvana, a dream that is rarely achieved even by those who have unlimited freedom with the budgets to match. That doesn’t mean we shouldn’t try to achieve it however and I’ll be damned if I haven’t tried at every place I’ve ever worked.

The one thing that always comes up is “Why don’t we use Macs in the office? They’re so easy to use!”. Indeed my two month long foray into the world of OSX and all things Mac showed that it is an easy operating system to pick up and I could easily see why so many people use it as their home operating system. Hell, at my current work place I can count several long time IT geeks who’ve switched their entire household over to solely Apple gear because it just works and, as anyone who works in IT will tell you, the last thing you want to be doing at home is fixing up PCs.

You’d then think that Macs would be quite prevalent in the modern workplace, what with their ease of use and popularity amongst the unwashed masses of users. Whilst their usage in the enterprise is growing considerably they’re still hovering just under 3% market share, or about the same amount of market share that Windows Phone 7 has in the smart phone space. That seems pretty low but it’s in line with worldwide PC figures, with Apple sitting somewhere in the realm of 5% or so. Still there’s a discrepancy there so the question remains as to why Macs aren’t seen more often in the workplace.

The answer is simple: Apple simply doesn’t care about the enterprise space.

I had my first experience with Apple’s enterprise offerings very early on in my career, way back when I used to work for the National Archives of Australia. As part of the Digital Preservation Project we had a small data centre that housed 2 similar yet completely different systems. They were designed in such a way that should a catastrophic virus wipe out the entire data store on one, the replica on the other would be unaffected since it was built from completely different software and hardware. One of these systems utilized a few shelves of Apple’s Xserve RAID Array storage. In essence they were just a big lump of direct attached storage and for that purpose they worked quite well. That was until we tried to do anything with it.

Initially I just wanted to provision some of the storage that wasn’t being used. Whilst I was able to do some of the required actions through the web UI the unfortunate problem was that the advanced features required installing the Xserve tools on a Mac computer. Said computer also had to have a fibre channel card installed, something of a rarity to find in a desktop PC. It didn’t stop there either, we also tried to get Xsan installed (so it would be, you know, an actual SAN) only to find out that we’d need to buy yet more Apple hardware in order to be able to use it. I left long before I got too far down that rabbit hole and haven’t really touched Apple enterprise gear since.

You could write that off as a bad experience but Apple has continued to show that the enterprise market is simply not their concern. No less than 2 years after I last touched an Xserve RAID Array, Apple cancelled production of them, instead offering up a rebadged solution from Promise. 2 years after that Apple discontinued production of its Xserve servers and lined up their Mac Pros as a replacement. As any administrator will tell you the replacements are anything but, and since most of their enterprise software hasn’t received a proper update in years (Xsan’s last major release was over 3 years ago) no one can say that Apple has the enterprise in mind.

It’s not just their enterprise level gear that’s failing in corporate environments. Whilst OSX is easy to use it’s an absolute nightmare to administer on anything larger than a dozen or so PCs as the management tools available simply don’t support it. Whilst Macs do integrate with Active Directory there are a couple of limitations that don’t exist for Windows PCs on the same infrastructure. There’s also the fact that OSX can’t be virtualized unless it runs on Apple hardware, which kills it off as a virtualization candidate. You might think that’s a small nuisance but it means that you can’t do a virtual desktop solution using OSX (since you can’t buy the hardware at scale to make it worthwhile) and you can’t utilize any of your current investment in virtual infrastructure to run additional OSX servers.

If you still have any doubts that Apple is primarily a hardware company then I’m not sure what planet you’re on.

For what it’s worth Apple hasn’t been harmed by ignoring the enterprise as its consumer electronics business has more than made up for any losses incurred. Still, I often find users complaining about how their work computers can’t be more like their Macs at home, ignorant of the fact that Apple in the enterprise would be an absolutely atrocious experience. Indeed it’s looking to get worse as Apple looks to iPhoneize their entire product range including, unfortunately, OSX. I doubt Apple will ever change direction on this which is a real shame as OSX is the only serious competitor to Microsoft’s Windows.

So Long Flash and Thanks for all the Vids.

You’d be forgiven for thinking that I was some kind of shill for Adobe what with all the pro-Flash articles I’ve posted in the past. Sure I’ve taken their side consistently but that’s not because of some kind of fanboy lust for Adobe or some deep rooted hatred for Apple. Rather it was because the alternatives, HTML5 with CSS3 and JavaScript, are still quite immature in terms of tooling, end user experience and cross platform consistency. Flash on the other hand is quite mature in all respects and, whilst I do believe that the HTML5 path is the eventual future for the web, Flash will remain a dominant part of the web for a while to come, even if it’s just for online video.

Adobe had also been quite stalwart in their support for Flash, refusing to back down on their stance that they were “the way” to do rich content on the Internet. Word came recently however that they were stopping development on the mobile version of Flash:

Graphics software giant Adobe announced plans for layoffs yesterday ahead of a major restructuring. The company intends to cut approximately 750 members of its workforce and said that it would refocus its digital media business. It wasn’t immediately obvious how this streamlining effort would impact Adobe’s product line, but a report that was published late last night indicates that the company will gut its mobile Flash player strategy.

Adobe is reportedly going to stop developing new mobile ports of its Flash player browser plugin. Instead, the company’s mobile Flash development efforts will focus on AIR and tools for deploying Flash content as native applications. The move marks a significant change in direction for Adobe, which previously sought to deliver uniform support for Flash across desktop and mobile browsers.

Now the mobile version of Flash had always been something of a bastard child, originally featuring a much more cut down feature set than its fully fledged cousin. More recent versions brought them closer together but the experience was never quite as good especially with the lack of PC level grunt on mobile devices. Adobe’s mobile strategy now is focused on making Adobe AIR applications run natively on all major smart phone platforms, giving Flash developers a future when it comes to building mobile applications. It’s an interesting gamble, one that signals a fundamental shift in the way Adobe views the web.

Arguably the writing has been on the wall for this decision for quite some time. Back at the start of this year Adobe released Wallaby, a framework that allows advertisement developers to convert Flash ads into HTML5. Indeed even back then I said that Wallaby was the first signal that Adobe thought HTML5 was the way of the future and was going to start transitioning towards it as their platform of the future. I made the point then that whilst Flash might eventually disappear Adobe wouldn’t, as they have a history of developing some of the best tools for non-technical users to create content for the web. Indeed there are already prototypes of such tools available so it’s clear that Adobe is looking towards an HTML5 future.

The one place that Flash still dominates, without any clear competitors, is online video. Their share of the market is somewhere around 75% (that’s from back in February so I’d hazard a guess that it’s lower now) with the decline being driven by mobile devices that lack support for Flash video. HTML5’s alternative is unfortunately still up in the air as the standards body struggles to find an implementation that can be open, unencumbered by patents and yet still able to support things like Digital Rights Management. It’s this lack of standardization that will see Flash around for a good while yet, as until there’s an agreed upon standard that meets all those criteria Flash will remain the default choice for online video.

So it looks like the war that I initially believed Adobe would win has instead seen Adobe pursuing an HTML5 future. It’s probably for the best as they will then be providing some of the best tools in the market whilst still supporting open standards, something that’s to the benefit of all users of the Internet. Hopefully that will also mean better performing web sites, as Flash had a nasty reputation for bringing even some of the most powerful PCs to their knees with poorly coded Flash ads. The next few years will be crucial to Adobe’s long term prospects but I’m sure they have the ability to make it through to the other end.

Guest Post: The Powerless Hero.

Today’s post comes to you courtesy of one of my long time friends and former blogger, David Wright. I’ve always been a fan of his writing for its impassioned, no holds barred style that reflects his real world self to a tee. Below he tackles a game I myself reviewed just a week ago, Call of Duty: Modern Warfare 3. That’s all the introduction I’ll give as the article stands on its own and makes for some damn good reading.

Enter Dave W…

I got into an argument with my friends when MW3 came out, most notably Dave K, who is generously posting this. I made the claim that knowing the trajectory of the series and the background of the companies who were designing and building the game precluded it from ever having a chance of being any good. Dave countered with the simple fact I had not played the game. Fair call.

I decided to play through the whole single player and draw judgement then, as from what I could tell the single player clocked in at around the 4 hour mark if you had a pulse. I… ahem… acquired a copy and sat down on a Friday night resolute that I was going in with a clear mind, determined to just experience the game for what it was.

Brief backstory on the development behind the game because, as much as I would try, I had made my original argument based on the history of the game franchise, so I will try to quickly sum up. Call of Duty is a massive game franchise owned by Activision. It had been traded around their stable of dev teams for a while until the original Call of Duty: Modern Warfare was made by Infinity Ward. It was huge; suddenly Activision had another massive hit on their hands and in traditional Activision style they had to have a Call of Duty release each year. So Treyarch becomes the B team for Modern Warfare. They trade back and forth, then at the end of MW2 some shenanigans go down over at Activision. The heads of Infinity Ward get called into a meeting with Activision while some heavies go and lock down their offices. They are fired. Before most people heard about this I assumed John Riccitiello, the head of EA, had a blank check and a stupid grin on his face, ready to go.

So the heads of IW leave to form a company with EA and start poaching everyone they want from IW. Now I am not saying what was left was the dregs but:


They stripped that company bare.

Every design lead? Almost half of the design team, the primary art lead, both animation leads and pretty much the entire writing staff. Things did not look good for Infinity Ward. Add in the fact they have Activision’s “You WILL ship a game on this deadline even though company morale does not exist” hanging over them. Not enough staff? Fine, they bring in Sledgehammer Games, a company that had not shipped a game yet, to help out. Through all this you have the rivalry of Treyarch always having to play second fiddle and now almost drooling, hoping IW slips up so Treyarch can take the MW crown.

With all this in the back of my mind I went into Modern Warfare almost grimly determined to give the game a fair shake.

Keep in mind I have not finished either MW1 or MW2, so as to the story I had a shaky grasp of what was going on but did not know the finer points. Well, neither did MW3 apparently. Thrown into the start of the game and we are chasing after an Evil (with a capital E) Russian man who seems to completely elude us. There is a massive war raging through the United States and you get to flit between multiple characters to globe trot.

As the game starts out I am amazed at how good the engine looks. Everything looks busy and real, the battlefields feel alive and frantic. You cannot go more than a few checkpoints before a building is hit by rockets and smashes to the road or tanks come out of nowhere and start messing with your day. There is always something going on.

Add in the fact the gunplay and control are seriously second to none. Every shooter I have ever played gets held up to the gold standard of Counter-Strike. I honestly put MW3’s control right up there, easily level with CS. Everything feels responsive, there is the unconscious snap, the perfect recoil that you learn to counteract, and it all feels right.

Okay, but what about the game itself? Well it seems to be full of tough men doing tough jobs. I know this because I get to see them work all the time. My character? They barely trust me to not kill myself. It was a few levels in and something had been niggling at the back of my mind since the beginning. I couldn’t put my finger on it until we fought our way to the top of a building to blow up a jamming radar tower. Fair enough. We get to the top and sure enough there is the tower. I run up to it and, nothing. I have no known way of destroying it. I shoot at it, nothing. Another enemy jumps out at me and I nail him in the head.

“Quick, lay the charges!” shouts my mentally challenged leader.

Charges? I don’t have… I do now. My weapon has disappeared and I am holding a package of explosives. The glowing silhouette of the explosives is helpfully put on the tower. I run up, press my use key and place them; now I have a detonator in my hand.

“Blow it!” delivers the same deranged man.

Really? What happens if I don’t? Do I get a say in this? Do I get to actually participate in this game? I wander around the roof top for a while with the detonator in my hand while my squad leader repeats his two lines about blowing it up over and over. Finally putting the poor man out of his misery, thinking he should really lie down for a nap somewhere, I blow it up. The next scene triggers. Men come running out on the roof top next to us. A never ending stream of guys and I now magically have access to a guided missile system.

Does it matter if I kill them? Does the game care? Like a petulant child I go sit in the corner and ignore the strained cries from the men. An enemy helicopter shows up and I am told to blow it up. This advances to the next section and off we go.

I started to notice it then. That feeling I had since the beginning. I had nothing to do with this game. Everyone else was amazing; I was there to clean up after them. Everywhere you go you have Price or Soap or any of the other interchangeable burly men with, literally, the word “Follow” hovering above their heads in case you forget how to walk. It just got worse as the game went on.

A short section later I was following Price, I believe, sneaking along with silenced guns. A guard crosses our path so I nail him in the head. “BHWWWAAMMMMmmmm Game over!” Shit, okay, maybe I have to let him walk past without noticing us? I reload the game, the same guard walks out, I don’t shoot him, then Price steps up and runs through a takedown animation killing the guy. Ohh, I was not supposed to kill him because only the NPCs get to do any of the real work.

Another section had Price and me sneaking into a castle for reasons mostly unknown to me, where I was literally told when to walk, crawl, stop, sneak, stop again, take that guy out. This was around half way through the game and it was not so much holding my hand as keeping me on a choker leash, watching me like I was about to make a mess on the carpet.

The game has some amazing sections, running along and the world is falling apart around you. Speeding along in an inflatable boat escaping from a Russian submarine where you just launched all its missiles on their own fleet. But it never lets you actually DO anything. The game is just a point to point exercise where you are constantly being funnelled, herded and yelled at down the path being dictated to you as you get to watch everyone else do all the fun stuff.

With all this you have the, well, it’s not bad storytelling so much as mindless and incoherent. Why did we end up in this mine? Why is the Russian president’s daughter here? Wait, who am I playing as this time? There is a pathetic attempt at the old double cross where one of the main characters dies (I think he dies, maybe I just wished him dead); he informs the other grizzled hero, Price, that Yuri (the useless git you happen to be controlling at that point) knew the big Evil guy. Gasp! Price punches you out and, again, as soon as it starts you lose all character control. Yuri then goes on to tell a one minute story.

“Ohh the Evil guy that everyone hates and wants to kill? Yeah I totally used to work for him! I didn’t tell you because I have a strange and debilitating disease that you also share because you are going to believe everything I am now saying.”

I am not sure if I should feel insulted that someone somewhere got paid to write this or that I am supposed to find this interesting.

As for people dying, as if sensing it was as bad as it really was, MW3 tries for some poignant moments, or at least moments where you are supposed to feel something. However each of these interactions seems to be written by an alien who was sent to study human culture and learnt about emotions from reading scripts of rejected 70s cop dramas.

How do they try to make you feel shock and fear at a terrorist gas attack? By putting you in control of a white human male filming his adorably white small girl child and his equally adorably white wife on their apparently fun trip to the city. While filming, the wife and child run up to a truck and turn and wave, calling for you to come closer. The game will just keep calling at you, it will not let you do anything else, there is nothing else for you to do but walk forward triggering the explosion and the lingering glimpse of your family being killed. It is so ham fisted and terrible, the game has not earned the gravitas, it has not earned the tone. It just feels pathetic.

So as the story line lurches to its conclusion and you are put into body after body with the word “Follow” forever floating in your sights, I pushed on. I got to see Price stealth kill people left and right, I got to see Soap take out entire battalions and dictate my every move. Near the end, during one of the briefings I learnt that, yes, YES! I was going to play as one of the big guys. Me! They were letting the beast off the chain; I could do what I wanted. I spawned, eager to lead my squad to glorious victory or hellish defeat, either way it was finally my call.

Instead, there stood Yuri, the ball-less wonder who I always had to play as, always forced to follow along like a chastised puppy. I swear the git was grinning. What was that I could see?

A glowing sign floating above his head?

“Follow”

[Figure: VMware CPU over-commit ready times, 18 cores on a 12 core host]

Virtual Machine CPU Over-provisioning: Results From The Real World.

Back when virtualization was just starting to make headway into the corporate IT market the main aim of the game was consolidation. Vast quantities of CPU, memory and disk resources were being squandered as servers sat idle for the vast majority of their lives, barely ever using the capacity that was assigned to them. Virtualization gave IT shops the ability to run many low resource servers on the one box, significantly reducing hardware costs whilst providing a whole host of other features. It followed then that administrators looked towards over-provisioning their hosts, i.e. creating more virtual machines than the host was technically capable of handling.

The reason this works is a feature of virtualization platforms called scheduling. In essence when you put a virtual machine on an over-provisioned host it is not guaranteed to get resources when it needs them; instead it’s scheduled on and off the physical CPUs in order to keep it and all the other virtual machines running properly. Surprisingly this works quite well as for the most part virtual machines spend a good part of their life idle and the virtualization platform uses this information to schedule busy machines ahead of idle ones. Recently I was approached to find out what the limits were of a new piece of hardware that we had procured and I’ve discovered some rather interesting results.

The piece of kit in question is a Dell M610x blade server with the accompanying chassis and interconnects. The specifications we got were pretty good, being a dual processor arrangement (2 x Intel Xeon X5660) with 96GB of memory. What we were trying to find out was what kind of guidelines we should have around how many virtual machines could comfortably run on such hardware before performance started to degrade. There was no such testing done with previous hardware so I was working in the dark on this one and devised my own test methodology in order to figure out the upper limits of over-provisioning in a virtual world.

The primary performance bottleneck for any virtual environment is the disk subsystem. You can have the fastest CPUs and oodles of RAM and still get torn down by slow disk. However most virtual hosts will use some form of shared storage so testing that is out of the equation here. The two primary resources we’re left with then are CPU and memory and the latter is already a well known problem space. However I wasn’t able to find any good articles on CPU over-provisioning so I devised some simple tests to see how the system would perform under a load well above its capabilities.

The first test was a simple baseline: since the server has 12 available physical cores (HyperThreading might say you get another 12, but that’s a pipe dream) I created 12 virtual machines, each with a single core, then fully loaded their CPUs to max capacity. Shown below is a stacked graph of each virtual machine’s ready time, which is a representation of how long the virtual machine was ready¹ to execute some instruction but was not able to get scheduled onto the CPU.

The initial part of this graph shows the machines all at idle. Now you’d think at that stage their ready times would be zero since there’s no load on the server. However since VMware’s hypervisor knows when a virtual machine is idle it won’t schedule it on as often, as the idle loops are simply wasted CPU cycles. The jumpy period after that is when I was starting up a couple of virtual machines at a time and, as you can see, those virtual machines’ ready times drop to 0. The very last part of the graph shows the ready time rocketing down to nothing for all the virtual machines, with the top grey part of the graph being the ready time of the hypervisor itself.
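
As an aside, the ready figures vSphere exposes are raw summation values in milliseconds accumulated over each sample interval; if you prefer to reason about them as a percentage of time a vCPU spent waiting for a core, the conversion is straightforward. A minimal sketch, assuming the default 20 second real-time sample interval:

```python
def ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a per-vCPU ready summation value (milliseconds accumulated over
    one sample interval) into the percentage of that interval spent waiting for
    a physical core. 20 seconds matches the default real-time sample interval;
    adjust it for longer rollup periods."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# e.g. 500ms of accumulated ready time in a 20 second sample is 2.5% ready
print(ready_percent(500))
```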

This test doesn’t show anything revolutionary as this is pretty much the expected behaviour of a virtualized system. It does however provide us with a solid baseline from which to draw conclusions in further tests. The next test I performed was to see what would happen when I doubled the work load on the server, increasing the virtual core count from 12 to a whopping 24.

For comparison’s sake the first graph’s peak is equivalent to the first peak of the second graph. What this shows is that when the CPU is oversubscribed by 100% the CPU wait times rocket through the roof, with the virtual machines waiting up to 10 seconds in some cases to get scheduled back onto the CPU. The average was somewhere around half a second which for most applications is an unacceptable amount of time. Just imagine trying to use your desktop and having it freeze for half a second every 20 seconds or so; you’d say it was unusable. Taking this into consideration we now know that there must be some happy medium in between. The next test then aimed bang in the middle of these two extremes, putting 18 virtual CPUs on a 12 core host.

Here’s where it gets interesting. The graph depicts the same test running over the entire time but as you can see there are very distinct sections depicting what I call different modes of operation. The lower end of the graph shows a period when the scheduler is hitting its marks and the wait times are overall quite low. The second is when the scheduler gives much more priority to the virtual machines that are thrashing their cores and the machines that aren’t doing anything get pushed to the side. However in both instances the 18 running cores are able to get serviced within a maximum of 20 milliseconds or so, well within the acceptable range of most programs and user experience guidelines.

Taking this all into consideration it’s reasonable to say that the maximum you can oversubscribe a virtual host in regards to CPU is 1.5 times the number of physical cores. You can extrapolate that further by taking into consideration the average load: if it’s consistently below 100% then you can divide the number of CPUs by that percentage. For example if the average load of these virtual machines was 50% then theoretically you could support 36 single core virtual machines on this particular host. Of course once you get into very high CPU counts things like overhead start to come into consideration, but as a hard and fast rule it works quite well.
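
Distilled into a quick sketch (the function and its defaults are my own framing of the rule above, not anything VMware publishes; treat the output as a planning guide rather than a hard limit):

```python
def max_single_core_vms(physical_cores: int, avg_load_fraction: float = 1.0,
                        oversub_ratio: float = 1.5) -> int:
    """Rule-of-thumb ceiling on single core VMs per host: 1.5 vCPUs per physical
    core, scaled up further when the VMs' average CPU load sits below 100%."""
    return int(physical_cores * oversub_ratio / avg_load_fraction)

print(max_single_core_vms(12))        # 18 on the 12 core host tested above
print(max_single_core_vms(12, 0.5))   # 36 if the VMs average 50% CPU load
```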

If I’m honest I was quite surprised with these results as I thought once I put a single extra thrashing virtual machine on the server it’d fall over in a screaming heap under the additional load. It seems though that VMware’s scheduler is smart enough to service a load much higher than what the server should be capable of without affecting the other virtual machines too adversely. This is especially good news for virtual desktop deployments as typically the limiting factor there was the number of CPU cores available. If you’re an administrator of a virtual deployment I hope you found this informative and that it helps you when planning future virtual deployments.

¹CPU ready time was chosen as the metric as it most aptly showcases a server’s ability to serve a virtual machine’s request of the CPU when in a heavy scheduling scenario. Usage wouldn’t be an accurate metric to use since for all these tests the blade was 100% utilized no matter the number of virtual machines running.
