Posts Tagged ‘development’


Making Things Difficult For Myself (or Game Art is Hard).

I’m no stranger to game development, having dabbled in it when I was at University. It was by far my favourite course as, whilst I had a decent amount of programming knowledge, translating that into creating something playable seemed like a monumental step that required a whole lot of knowledge I simply did not have. This was long before the time when tools like Unity or GameMaker were considered viable and our lecturer made everything easy for us by providing a simple framework on top of DirectX, allowing us to create simple 2D games without having to learn the notoriously complicated API. Since then I’ve tried my hand at Unity several times over and, whilst it seems like a programmer’s dream, there’s always one place I come unstuck: the art.

This isn’t exactly an unknown issue to me; all my university projects used sprites pilfered from various free game resource sites and anything extra was little more than primitive 3D objects whipped up in 3D Studio Max. For my current project however I had the bright idea to try and generate some terrain using one of those fancy bits of software that seem to make good looking landscapes without too much hassle. After wandering through a sea of options Bryce seemed to be the one to go for and, better yet, it could export the mesh so I could import it directly into Unity without too many hassles. For the $20 asking price I figured it’d be worth it to get me going and hey, should I want to go full procedural down the line, it’d be a good introduction to the things I’d need to consider.

Oh how naive I was back then…

Whilst Bryce is everything it claims to be (and the tutorials on using it are really quite good) I just couldn’t seem to get it to create the type of scenery I had in my head. This is entirely a user-based problem, one I’ve suffered from for a long time, where the interconnects between my brain and the tools I’m using to create just don’t seem to gel well enough to produce the results I’m looking for. Whilst I was able to generate a decent looking mesh and import it into Unity it was nothing like I wanted it to be and, after sinking a couple of hours into it, I decided it was best left to one side lest I uninstall everything in frustration.

Realistically though the problem was one of expectations: my abilities with a program I had never used before and my ideas of what I’d produce with it were completely out of alignment. After mulling it over for the past couple of days I’ve come to realise that I had set the bar way too high for what I wanted to create and, indeed, creating such things at this stage of development is actually a distraction from the larger goals I’m trying to achieve. I’ve since settled on making do with a flat plane for now (as that’s all I’ll really need for the foreseeable future) or, should I really want to put something pretty in there, I’ll just lift a 3D model from a game that’s close enough so I’ve got something to work with.

You’d think after churning through project after project I would’ve become adept at recognising when I was pursuing something that was antithetical to making actual progress but it seems even after so many years I still find myself making things far more difficult than they need to be. What I really need to do is focus on the parts where I can make good progress and, should I make enough in those areas, then look towards doing the parts that are outside my area of expertise. Of course the best solution would be to partner with a 3D artist, but I’d rather wait until I’ve got something substantial working before I try and sell someone else on my idea.

That is unless you’re one and you’ve got nothing better to do with your time ;)

 


OAuth2 and C# Desktop Apps Don’t Seem To Get Along.

I hadn’t been in Visual Studio for a while now, mostly because I had given up on all of my side projects due to the amount of time they soaked up versus my desire to do better game reviews on here, which requires me to spend more time actually playing the games. I had come up with an idea for a game a while back and was really enjoying developing the concept in my head, so I figured it would be good to code up a small application to get my head back in the game before I tackled something a little more difficult. One particular idea I had was a Soundcloud downloader/library manager as, whilst there are other tools that do this job, they’re a little cumbersome and I figured it couldn’t be too difficult to whip it up in a day’s worth of coding.

How wrong I was.

The Soundcloud API has a good amount of documentation and from what I could tell I would be able to get my stream using it. However, since this wasn’t something that was publicly available, I’d have to authenticate to the API first through their OAuth2 interface, something I had done with other sites before so I wasn’t too concerned that it would be a barrier. Of course the big difference between those other projects and this one was that this application was going to be a desktop app, so I figured I was either going to have to do some trickery to get the token or manually step through the process in order to get authenticated.

After a quick Google around it looked like the OAuth library I had used previously, DotNetOpenAuth, would probably fit the bill and it didn’t take me long to find a couple of examples that looked like they’d do the job. Even better, I found an article that showed an example of the exact problem I was trying to solve, albeit using a different library to the one I was. Great, I thought, I’ll just marry up the examples and get myself going in no time and, after a little bit of working around, I was getting what appeared to be an auth token back. Strangely though I couldn’t access any resources using it, either through my application or directly through my browser (as I had been able to do in the past). Busting open Fiddler showed that I was getting 401 (Unauthorized) errors back, indicating that the token I was providing wasn’t valid.

After digging around and looking at various resources it appears that whilst the original OAuth API might still be online it’s not the preferred way of accessing anything and, as far as I can tell, is mostly deprecated. No worries, I thought, I’ll just hit up the OAuth2 API instead, figuring it should be relatively simple to authenticate to since DotNetOpenAuth now natively supports it. Try as I might to find a working example, I simply could not get it to work with Soundcloud’s API, not even using the sample application that DotNetOpenAuth provides. Searching for other, simpler examples left me empty handed, especially when I looked for a desktop application workflow.

I’m willing to admit that I probably missed something here but honestly the amount of code and complexity that appears to be required to handle the OAuth2 authentication process, even when you’re using a library, seems rather ludicrous. Apparently WinRT apps have it pretty easy, but those are web pages masquerading as applications which can take advantage of the web auth workflow, something I was able to make work quite easily in the past. If someone knows of a better library, or has an example of the OAuth2 process working with a desktop application in C#, I’d love to see it because I simply couldn’t figure out how to do it, at least not after half a day of frustration.
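For what it’s worth, the bare-bones flow I was trying to approximate looks something like the sketch below: stand up a throwaway HttpListener on the redirect URI, bounce the user out to their browser, catch the authorization code on the way back and swap it for a token with a plain HTTP POST. Treat the Soundcloud endpoint URLs and parameter names as assumptions on my part rather than gospel and check them against the current API docs before borrowing any of this.

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class OAuth2DesktopSketch
    {
        // Placeholder values; the endpoint URLs are what Soundcloud documented at the time and may have changed.
        const string ClientId = "YOUR_CLIENT_ID";
        const string ClientSecret = "YOUR_CLIENT_SECRET";
        const string RedirectUri = "http://localhost:8910/callback/";

        static async Task<string> GetTokenResponseAsync()
        {
            // Listen on the redirect URI so the browser can hand the authorization code back to us.
            var listener = new HttpListener();
            listener.Prefixes.Add(RedirectUri);
            listener.Start();

            // Send the user off to the provider's consent page in their default browser.
            Process.Start("https://soundcloud.com/connect" +
                          "?client_id=" + ClientId +
                          "&redirect_uri=" + Uri.EscapeDataString(RedirectUri) +
                          "&response_type=code");

            // Block until the provider redirects back, then pull the code off the query string.
            HttpListenerContext context = listener.GetContext();
            string code = context.Request.QueryString["code"];
            context.Response.Close();
            listener.Stop();

            // Exchange the code for an access token with a straight POST; no OAuth library involved.
            using (var http = new HttpClient())
            {
                var response = await http.PostAsync("https://api.soundcloud.com/oauth2/token",
                    new FormUrlEncodedContent(new Dictionary<string, string>
                    {
                        { "client_id", ClientId },
                        { "client_secret", ClientSecret },
                        { "grant_type", "authorization_code" },
                        { "redirect_uri", RedirectUri },
                        { "code", code }
                    }));
                return await response.Content.ReadAsStringAsync();   // JSON body containing access_token
            }
        }
    }

It’s not pretty, and it assumes the user can complete the consent page in a browser, but it sidesteps the libraries entirely, which is more than I managed with DotNetOpenAuth.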


Microsoft Still Playing Catch Up With Policy Updates.

It’s been a rough few months for Microsoft’s gaming division, with them copping flak from every angle about nearly all aspects of their next generation console, the Xbox One. I’ve tried to remain mostly neutral on the whole ordeal as I had originally put myself down for both consoles when they release, but that changed when I couldn’t find a compelling reason to get both. Since then Microsoft has tried to win back the gamers it alienated with its initial announcements, although it was clear the damage was done in that respect and all this did was help to keep the loyalists happy with a choice they were never going to make. More recently it’s been all quiet from Microsoft, perhaps in the hope that silence would do more to help than anything else they could say at this point.


However a recent announcement from Microsoft has revealed that not only will independent developers be able to self-publish on the Xbox One platform, they’ll also be able to use a retail kit as a debug unit. Considering that development kits have traditionally been in the order of a couple of thousand dollars (the PlayStation 3 one was probably the most expensive I ever heard of, at $20,000 on release day) this announcement is something of a boon for indie developers, as those looking to do cross platform releases now don’t have to spend a significant chunk of change in order to develop on Microsoft’s console. On the surface that would seem to be a one-up on Nintendo and Sony but, as it turns out, Microsoft isn’t doing anything truly notable with this announcement; they’re just playing catch up yet again.

Sony announced at E3 that they’d allow indie developers to self publish on the PlayStation 4, however you’ll still need to get your hands on a development kit if you want to test your titles properly. This presents a barrier of course, especially if they retain the astronomical release day price (I wouldn’t expect that though), however Sony has a DevKit Loaner program which provides free development kits to studios who need them. They also have a whole bunch of other benefits for devs signing up to their program, which would seem to knock out some of the more significant barriers to entry. I’ll be honest, when I first started writing this I didn’t think Sony had any of this, so it’s a real surprise that they’ve become this welcoming to indie developers.

Nintendo has a similarly generous set of offerings for indies, although it wasn’t always that way. Updates are done for free and the review process, whilst still mandatory, is apparently a lot faster than on other platforms. Additionally, if you get into their program (which has requirements that I could probably meet, seriously) you’ll also find yourself with a copy of Unity 4 Pro at no extra charge, which allows you to develop titles for multiple platforms simultaneously. Sure this might not be enough to convince a developer to go full tilt on a WiiU exclusive, but those considering a multiplatform release after seeing some success on one platform might give it another look after seeing what Nintendo has to offer.

Probably the real kicker, at least for us Australians, is that even though indies will be able to self publish on the new platform after testing on retail consoles, we still won’t be able to see their games thanks to our lack of XBLIG. Microsoft are currently not taking a decisive stand on whether this will change (it seems most of the big reveals they want to make will be at Gamescom next month) but the smart money is on no, mostly due to the rather large fees required to get a game classified in Australia. This was supposed to be mitigated somewhat by co-regulation by the industry as part of the R18+ classification reforms and it has been, to some extent, although it seems to be aimed at larger enterprises currently as I couldn’t find any fee-for-service assessors (there were a few jobs up on Seek for some though, weird). Whilst I’m sure that wouldn’t stop Australian indie devs from having a crack at the Xbox One, I’m sure it’d be a turn off for some as who doesn’t want to see their work available in their own country?

I’m getting the feeling that Microsoft has a couple of aces up its sleeve for Gamescom so I’ll hold back on beating the already very dead horse and instead say I’m interested to see what they have to say. I don’t think there’s anything at this point that would convince me to get one, but I’m still leagues away from writing it off as a dead platform. Right now the ball is in Microsoft’s court and they’ve got a helluva lot of work to do if they want their next gen’s launch day to look as good as Sony’s.


The Ups and Downs of a Weekend Developing on Azure.

I heap a lot of praise on Windows Azure here, enough for me to start wondering whether it makes me sound like a Microsoft shill, but honestly I think it’s well deserved. As someone who’s spent the better part of a decade setting up infrastructure for applications to run on, and then began developing said applications in my spare time, I really do appreciate not having to maintain another set of infrastructure. Couple that with the fact that I’m a full Microsoft stack kind of guy and it’s really hard to beat the tight integration between all of the products in the cloud stack, from the development tools to the back end infrastructure. So, like many of my weekends recently, I spent the previous one coding away on the Azure platform and it was filled with some interesting highs and rather devastating lows.

I’ll start off with the good as it was really the highlight of my development weekend. I had promised to work on a site for a long-time friend’s upcoming wedding and, whilst I had figured out the majority of it, I hadn’t gotten around to cleaning it up for a first version to show off to him. I spent the majority of my time on the project getting the layout right, wrangling JavaScript/jQuery into behaving properly and spending an inordinate amount of time trying to get the HTML to behave the way I wanted it to. Once I had gotten it into an acceptable state I turned my eyes to deploying it and that’s where Azure Web Sites comes into play.

For the uninitiated, Azure Web Sites is essentially a cut down version of the Azure Web Role, allowing you to run pretty much full scale web apps for a fraction of the cost. Of course this comes with limitations and unless you’re running at the Reserved tier you’re essentially sharing a server with a bunch of other people (i.e. a common multi-tenant scenario). For this site, which isn’t going to receive a lot of traffic, it’s perfect and I wanted to deploy the first cut of the app onto this platform. Like any good admin I simply dove in head first without reading any documentation on the process and, to my surprise, I was up and running in a matter of minutes. It was pretty much: create the web site, download the publish profile, click Publish in Visual Studio, import the profile and wait for the upload to finish.

Deploying a web site on my own infrastructure would be a lot more complicated; I can’t tell you how many times I’ve had to chase down dependency issues or missing libraries that I have installed on my PC but not on the end server. The publishing profile, coupled with the smarts in Visual Studio, was able to resolve everything (the deployment console shows the whole process, which was actually quite cool to watch) and have it up and running at my chosen URL in about 10 minutes total. It’s very impressive considering this is still considered preview level technology, although I’m more inclined to classify it as a release candidate.

Other Azure users can probably guess what I’m going to write about next. Yep, the horrific storage problems that Azure had for about 24 hours.

I noticed some issues on Friday afternoon when my current migration (yes that one, it’s still going as I write this) started behaving…weird. The migration is in its last throes and I expected the CPU usage to start ramping down as the multitude of threads finished their work, which lined up with what I was seeing. However the number of records migrated wasn’t climbing at the rate it was previously (usually indicative of some error I had suppressed in order for the migration to run faster) even though the logs showed it was still going, just at a snail’s pace. Figuring it was just the instance dying, I reimaged it and then the errors started flooding in.

Essentially I was disconnected from my NoSQL storage, so whilst I could browse my migrated database I couldn’t keep pulling records out. This also had the horrible side effect of not allowing me to deploy anything as every attempt came back with SSL/TLS connection issues. Googling this led to all sorts of random posts, as the error is also shared by the libraries that power the WebClient in .NET, so it wasn’t until I stumbled across the ZDNet article that I knew I wasn’t in the wrong. Unfortunately you were really up the proverbial creek without a paddle if your Azure application depended on this, as the temporary fixes for the issue, either disabling SSL for storage connections or usurping the certificate handler, left your application rather vulnerable to all sorts of nasty attacks. I’m one of the lucky few who could simply do without until it was fixed but it certainly highlighted the issues that can occur with PAAS architectures.
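For the curious, the “usurp the certificate handler” workaround amounts to little more than the sketch below, which is exactly why it’s so dangerous: it tells .NET to trust any certificate, valid or not, for every outbound HTTPS call your application makes.

    using System.Net;

    public static class InsecureSslWorkaround
    {
        // Temporary workaround only: accept every server certificate, including the broken one.
        // Call once at start-up and rip it out the moment the platform issue is fixed.
        public static void Apply()
        {
            ServicePointManager.ServerCertificateValidationCallback =
                (sender, certificate, chain, sslPolicyErrors) => true;
        }
    }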

Honestly though that’s the only issue (that hasn’t been directly my fault) I’ve had with Azure since I started using it at the end of last year and, compared to other cloud services, it doesn’t fare too badly. It has made me think about what contingency strategy I’ll need to implement should any part of the Azure infrastructure go away for an extended period of time though. For the moment I don’t think I’ll worry too much as I’m not going to be earning any income from the things I build on it, but it will definitely be a consideration as I begin to unleash my products onto the world.

 

3 Tips on Improving Azure Table Storage Performance and Reliability.

If you’re a developer like me you’ve likely got a set of expectations about the way you handle data. Most likely they all have their roots in the object-oriented/relational paradigm, meaning you’d expect to be able to get some insight into your data by running a few queries against it or simply looking at the table, possibly sorting it to find something out. The day you decide to try out something like Azure Table storage, however, you’ll find that these tools simply aren’t available to you any more due to the nature of the service. It’s at this point where, if you’re like me, you’ll get a little nervous as your data can end up feeling like something of a black box.

A while back I posted about how I was over-thinking the scalability of my Azure application and how I was about to make the move to Azure SQL. That’s been my task for the past 3 weeks or so, and what started out as the relatively simple job of moving data from one storage mechanism to another has turned into a herculean task that has seen me dive deeper into both Azure Tables and SQL than I ever have previously. Along the way I’ve found out a few things that, whilst not changing my mind about the migration away from Azure Tables, certainly would have made my life a whole bunch easier had I known about them.

1. If you need to query all the records in an Azure table, do it partition by partition.

The not-so-fun thing about Azure Tables is that unless you’re keeping track of your data in your application there are no real metrics you can dredge up to give you some idea of what you’ve actually got. For me this meant that I had one table whose count I knew (due to some background processing I do using that table), however there are 2 others for which I had absolutely no idea how much data was actually contained in them. Estimates using my development database led me to believe there was an order of magnitude more data in there than I thought, which in turn led me to the conclusion that using .AsTableServiceQuery() to return the whole table was doomed from the start.

However Azure Tables isn’t too bad at returning an entire partition’s worth of data, even if the records number in the tens or hundreds of thousands. Sure, the query time goes up linearly with how many records you’ve got (as Azure Tables will only return a maximum of 1,000 records at a time) but if they’re all within the same partition you avoid the troublesome table scan which dramatically affects the performance of the query, sometimes to the point of it being cancelled, something the default RetryPolicy framework doesn’t handle. If you need all the data in the entire table you can query each partition in turn, dump the results into a list inside your application and then run your query over that, as sketched below.
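In concrete terms it looks roughly like this with the DataServices based client I’ve been using; the entity, table name and list of partition keys are all placeholders for whatever your schema actually is.

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.WindowsAzure.StorageClient;

    public class MyEntity : TableServiceEntity
    {
        public string Payload { get; set; }
    }

    public static class PartitionDump
    {
        // Assumes an existing TableServiceContext and a known list of partition keys.
        public static List<MyEntity> FetchAll(TableServiceContext context, IEnumerable<string> partitionKeys)
        {
            var everything = new List<MyEntity>();
            foreach (string partitionKey in partitionKeys)
            {
                var partitionQuery = (from entity in context.CreateQuery<MyEntity>("MyTable")
                                      where entity.PartitionKey == partitionKey
                                      select entity).AsTableServiceQuery();

                // Execute() follows the 1,000 record continuation tokens for us, and because the
                // filter is on PartitionKey the server never has to do a cross-partition table scan.
                everything.AddRange(partitionQuery.Execute());
            }
            return everything;   // query this in memory with plain LINQ to Objects
        }
    }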

2. Optimize your context for querying or updating/inserting records.

Unbeknownst to me, the TableServiceContext class has quite a few configuration options that allow you to change the way the context behaves. The vast majority of errors I was experiencing came from my background processor, which primarily deals with reading data without making any modifications to the records. If your application fits that pattern then it’s best to set the context’s MergeOption to MergeOption.NoTracking, as this means the context won’t attempt to track the entities at all.

If you have multiple threads running, or queries that return large numbers of records, this can lead to a rather large improvement in performance as the context doesn’t have to track changes to those entities and the garbage collector can free them up even if you use the context for another query. Of course this means that if you do need to make any changes you’ll have to use a tracking context and attach to the entity in question, but you’re probably doing that already. Or at least you should be.
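In code it’s a one line change on the read side, with the attach/update dance reserved for a context that is tracking; this assumes the same DataServices based client and MyEntity type as the previous tip, and the names are again placeholders.

    using System.Data.Services.Client;
    using System.Linq;
    using Microsoft.WindowsAzure.StorageClient;

    public static class ContextUsage
    {
        // readContext and writeContext are separate TableServiceContext instances, built however you normally build yours.
        public static void Example(TableServiceContext readContext, TableServiceContext writeContext)
        {
            // Read-only context: no identity/change tracking, so large result sets stay cheap.
            readContext.MergeOption = MergeOption.NoTracking;
            var rows = (from entity in readContext.CreateQuery<MyEntity>("MyTable")
                        where entity.PartitionKey == "some-partition"
                        select entity).AsTableServiceQuery().Execute().ToList();

            // When something does need changing, re-attach it to a tracking context first.
            var toUpdate = rows.First();
            toUpdate.Payload = "updated";
            writeContext.AttachTo("MyTable", toUpdate, "*");   // "*" ETag = overwrite regardless of concurrency
            writeContext.UpdateObject(toUpdate);
            writeContext.SaveChangesWithRetries();
        }
    }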

3. Modify your web.config or app.config file to dramatically improve performance and reliability.

For some unknown reason the default number of concurrent HTTP connections that a Windows Azure application can make to a single endpoint (although I get the feeling this affects all applications built on the .NET Framework) is set to 2. Yes, just 2. This then manifests itself as all sorts of crazy errors that don’t make a whole lot of sense, like “the underlying connection was closed”, when you try to make more than 2 requests at any one time (which includes queries against Azure Tables). The maximum number of connections you can specify depends on the size of the instance you’re using, but Microsoft has a helpful guide on how to set this and other settings in order to make the most out of it.
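The programmatic equivalent, run once at application or role start-up before any requests go out, looks like the sketch below; the actual values are just the ones that suited my instance sizes, so check them against the guide above rather than taking mine.

    using System.Net;

    public static class HttpTuning
    {
        // Call once before any storage/HTTP traffic, e.g. from RoleEntryPoint.OnStart() or Application_Start().
        public static void Apply()
        {
            ServicePointManager.DefaultConnectionLimit = 48;   // lifts the miserly default of 2 connections per endpoint
            ServicePointManager.UseNagleAlgorithm = false;     // commonly recommended for the small payloads Table storage deals in
            ServicePointManager.Expect100Continue = false;     // skips the extra 100-Continue round trip on inserts/updates
        }

        // The declarative route the heading refers to lives under <system.net> in web.config/app.config:
        // <connectionManagement><add address="*" maxconnection="48" /></connectionManagement>
    }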

Additionally some of the guys at Microsoft have collected a bunch of tips for improving the performance of Azure Tables in various circumstances. I’ve cherry picked the ones I’ve confirmed have worked wonders for me, however there are a fair few more in there that might be of use to you, especially if you’re looking to get every performance edge you can. Many of them are circumstantial and some require you to plan out your storage architecture in advance (so they can’t be easily retrofitted into an existing app) but since the others have worked I hazard a guess they would too.

I might not be making use of some of these tips now that my application is moving to SQL and TOPAZ, but if I can save anyone the trouble I went through trying to sort through all those esoteric errors then it was worth writing them down. Some of them are just good to know regardless of the platform you’re on (like the default HTTP connection limit) and should be incorporated into your application as soon as it’s feasible. I’ve yet to get all my data into production as it’s still migrating, but I get the feeling I might go on another path of discovery with Azure SQL in the not too distant future and I’ll be sure to share my tips for it then.

Twitter Card Integration Still Working For Me

What’s With This “Start Open, Get Big, Fuck Everyone Off” Thing Startups Are Doing?

One of the peeves I had with the official Twitter client on Windows Phone 7, something I didn’t mention in my review of the platform, was that among the other things it’s sub-par at (it really is the poor bastard child of its iOS/Android cousins) it couldn’t display images in-line. In order to actually see any image you have to tap the tweet and then the thumbnail to get a look at it, which usually loads the full sized image, something that isn’t required on smaller screens. The official apps on other platforms were quite capable of loading appropriately sized images in the feed, which was a far better experience, especially considering it worked for pretty much all of the image sharing services.

Everyone knows there’s no love lost between Instagram and me, but that doesn’t mean I don’t follow people who use it. As far back as I can remember their integration in the mobile apps has left something to be desired, especially if you wanted to view the full sized image, which usually redirected you to their atrocious web view. Testing it for this post showed that they’ve vastly improved that experience, which is great, especially considering I’m still on Windows Phone 7 which was never able to preview Instagram anyway, but it seems this improvement may have come as part of a bigger play from Instagram to claw back their users from Twitter.

Reports are coming in that Instagram has disabled their Twitter Card integration, which stops Twitter from being able to display the images directly in the feed like it has been doing since day one. Whilst I don’t seem to be experiencing the issue that everyone is reporting (as you can see from the devastatingly cute picture above) there are many people complaining about this, and Instagram has stated that disabling the integration is part of their larger strategy to provide a better experience through their own platform. Part of that was improving the mobile web experience I mentioned earlier.

It’s an interesting move because, for those of us who’ve been following both Twitter and Instagram for a while, the similarities are startling. Twitter has been around for some 6 years and spent the vast majority of that time being extraordinarily open, encouraging developers far and wide to come in and build on its platform. Instagram, whilst not being as wide open as Twitter was, did similar things, making their product integrate tightly with Twitter’s ecosystem whilst encouraging others to develop on their own. Withdrawing from Twitter in favour of their own platform is akin to what Twitter did to potential client app developers, essentially signalling to everyone that it’s their way or the highway.

The cycle is eerily similar: both companies started out as small time players with pretty dedicated fan bases (although Instagram grew like a weed in comparison to Twitter’s slow ride to the hockey stick) and then, after getting big, they start withdrawing all the things that made them great. Arguably much of Instagram’s growth came from its easy integration with Twitter, where many of the early adopters already had large followings, and without that I don’t believe they would’ve experienced the massive growth they did. Disabling this functionality seems like they’re shooting themselves in the foot with the intention of attempting some form of monetization eventually (that’s the only reason I can think of for trying to drive users back to the native platform) but I said the same thing about Twitter when they pulled that developer stunt, and they seem to be doing fine.

It probably shouldn’t be surprising that this is what happens when start ups hit the big time, because at that point they have to start thinking seriously about where they’re going. For giant sites like Instagram that are yet to turn a profit from the service they provide it’s inevitable that they’d have to start fundamentally changing the way they do business, and this is most likely just the first step in wider sweeping changes. I’m still wondering how Facebook is going to turn a profit from its investment as they’re $1 billion in the hole and there’s no sign of them making that back any time soon.


Azure Tables: Watch Out For Closed Connections.

Windows Azure Tables are one of those newfangled NoSQL type databases that excel at storing giant swathes of structured data. For what they are they’re quite good, as you can store very large amounts of data without having to pay through the nose like you would for a traditional SQL server or an Azure SQL instance. However that advantage comes at a cost: querying the data on anything but the partition key (which determines how the data is grouped within a table) and the row key (the unique identifier within that partition) results in queries that take quite a while to run, especially when compared to their SQL counterparts. There are ways to get around this, however no matter how well you structure your data eventually you’ll run up against this limitation and that’s where things start to get interesting.

By default whenever you do a large query against an Azure Table you’ll only get back 1,000 records, even if the query would return more. However if your query does have more results you’ll be able to access them via a continuation token that you can add to your original query, telling Azure that you want the records past that point. Those of us coding on the native .NET platform get the lovely benefit of having all of this handled for us by simply adding .AsTableServiceQuery() to the end of our LINQ statements (if that’s what you’re using), which deals with the continuation tokens transparently. For most applications this is great as it means you don’t have to fiddle around with the rather annoying business of extracting those tokens out of the response headers.
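To make that concrete, the difference is essentially one method call; the entity and table names below are placeholders and this assumes the DataServices based storage client whose .AsTableServiceQuery() extension I’m talking about.

    using System.Linq;
    using Microsoft.WindowsAzure.StorageClient;

    public class MyEntity : TableServiceEntity
    {
        public string Payload { get; set; }
    }

    public static class ContinuationExample
    {
        public static void DumpPartition(TableServiceContext context)
        {
            // A plain DataServiceQuery stops at the server's 1,000 record limit; to go further you'd have to
            // read the x-ms-continuation-NextPartitionKey / NextRowKey headers and re-issue the request yourself.
            IQueryable<MyEntity> raw = from entity in context.CreateQuery<MyEntity>("MyTable")
                                       where entity.PartitionKey == "some-partition"
                                       select entity;

            // Wrapping it via AsTableServiceQuery() makes Execute() follow those continuation tokens,
            // lazily issuing the follow-up requests as you enumerate the results.
            foreach (MyEntity record in raw.AsTableServiceQuery().Execute())
            {
                // process each record; further requests happen behind the scenes as needed
            }
        }
    }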

Of course that leads you down the somewhat lazy path of not thinking about the kinds of queries you’re running against your tables, and this can lead to problems down the line. Since Azure is a shared service there are upper limits on how long queries can run and how much data they can return. These limits aren’t exactly set in stone and, depending on how busy the particular server you’re querying is or the current network utilization, your query could either take an incredibly long time to return or could simply end up being closed off. Anyone who’s developed for Azure in the past will know this is pretty common, even for the more robust things like Azure SQL, but there’s one thing I’ve noticed over the past couple of weeks that I haven’t seen mentioned anywhere else.

As the above paragraphs might indicate, I have a lot of queries that try to grab big chunks of data from Azure Tables and I have, of course, coded in RetryPolicies so they’ll keep at it should they fail. There’s one thing that all the policies in the world won’t protect you from however, and that’s connections that are forcibly closed. I’ve had quite a few of these recently and I noticed that they appear to come in waves, rippling through all my threads, causing unhandled exceptions and forcing them to restart themselves. I’ve done my best to optimize the queries since then and the errors have mostly subsided, but it appears that should one long running query trigger Azure to force a connection closed, all connections from that instance to the same Table storage account will also be closed.
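For reference, wiring a retry policy onto one of these queries is only a couple of lines with the storage client I’m using (the retry count and back-off below are made up numbers, and MyEntity is the same placeholder type as above), and the point of this post is that the catch block is still needed because the forcibly closed connections surface as exceptions the policy never gets to see.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.WindowsAzure.StorageClient;

    public static class RetryExample
    {
        public static List<MyEntity> FetchPartition(TableServiceContext context, string partitionKey)
        {
            var query = (from entity in context.CreateQuery<MyEntity>("MyTable")
                         where entity.PartitionKey == partitionKey
                         select entity).AsTableServiceQuery();

            // Transient failures within a request get retried with an exponential back-off...
            query.RetryPolicy = RetryPolicies.RetryExponential(5, TimeSpan.FromSeconds(2));

            var results = new List<MyEntity>();
            try
            {
                results.AddRange(query.Execute());
            }
            catch (Exception)
            {
                // ...but a connection Azure forcibly closes mid-stream bubbles up here instead
                // (in my case as IO/WebException-flavoured errors wrapped by the data service layer),
                // so the calling thread has to log it and re-queue the work itself.
            }
            return results;
        }
    }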

Depending on how your application is coded this might not be an issue, however for mine, where the worker role has about 8 concurrent threads running at any one time, all accessing the same Table storage account, it means one long running query that gets terminated triggers a cascade of failures across the rest of the threads. For the most part this was avoided by querying directly on row and partition keys, however the larger queries had to be broken up using the continuation tokens and the results concatenated in memory. This introduces another limit on particular queries (as storing large lists in memory isn’t particularly great) which you’ll have to architect your code around. It’s by no means an unsolvable problem, however it has forced me to rethink certain parts of my application, which will probably need to be on Azure SQL rather than Azure Tables.

Like any cloud platform, Azure is a great service which requires you to understand what its various services are good for and what they’re not. I initially set out to use Azure Tables for everything and have since found that it’s simply not appropriate for that, especially if you need to query on parameters that aren’t the row or partition keys. If you have connections being closed on you inexplicably, be sure to check for any potentially long running queries on the same role because, as this post can attest, they could very well be the source of what ails you.


Windows 8, Games and The Critical Miss of Expecting The Desktop to Disappear.

If you were to believe what some games industry big wigs have been saying you’d be led to believe that Windows 8 is the beginning of the rapture for games on the Microsoft platform. At first it was just a couple of developers, big ones in their own right (like Notch), but when someone like Gabe Newell chimes in you start to take notice, as distributing games on the Windows platform is his bread and butter and he doesn’t say things like this lightly. However, as someone who’s grown up on the Microsoft platform, from the old MS-DOS days until today where I’m running Windows 8 full time on my home PC, and who has made his career on their products, I still can’t help but feel that their concerns are misplaced as they seem to hinge on a fundamental miscalculation about Microsoft’s overall product strategy.

Those concerns are laid out in lengthy detail by Casey Muratori in his latest instalment of Critical Detail: The Next Twenty Years. In it he lays out the future of the Microsoft platform, drawing on the past few decades of Microsoft’s developments and using them to draw conclusions about what the Microsoft ecosystem will look like in 2032. In this world the future of games on Windows seems grim, as none of the current AAA titles meet the requirements to be present on the Windows Store and the desktop interface is long gone, effectively destroying the games industry on any PC running their operating system.

It’s a grim future and the number of people worried about it coming to fruition seems to increase on a daily basis. However I believe that some of the assumptions made ignore critical facts that render all this doom and gloom moot, mostly because they ignore Microsoft’s larger strategies.

Before I dive into that, however, let me just acknowledge that yes, the Windows Store doesn’t seem like it would be a great place for current games developers. Realistically it’s no different from Google Play or the iOS App Store as many of the requirements are identical. Indeed all of these platforms strive for the same “family friendly” environment that’s bereft of porn (or anything overtly sexual), violence and excessive profanity, which does exclude a good number of games from making their debut on the platform. This hasn’t stopped countless companies from profiting on those platforms, but there’s no denying that the traditional games industry, with its love of all the things these marketplaces abhor, would struggle with these guidelines.

The fundamental misstep that many games developers appear to be making, though, is thinking that the Windows Store, and the guidelines that come along with it, will be the only way to release games onto the Windows operating system. Looking back at previous versions of Windows does show that Microsoft puts an end date on many technologies, however I don’t believe that the desktop will be among them. Sure you might not be able to write a DOS game and have it run in Windows 8, but you can take an MFC app built in 1992 and run it today (with the biggest challenge there possibly being recompiling it, though the same code will work).

The reason for the Metro (or Modern or whatever they’re calling it now) interface’s existence is not, as many believe, a direct reaction to the success of iPad/Android devices and Microsoft’s failure to capitalize on it. The Metro interface, which is built upon the WinRT framework, exists primarily to provide a unified platform where applications can deliver a consistent experience across the three major screens users interact with. The capabilities provided within that framework are a fairly comprehensive subset of the larger .NET framework, but it’s not fully feature complete as it needed to be cut down in order to be usable on ARM based devices. Whilst it still has access to the goodies required to make games (you can get DirectX on it, for example) it’s not the default platform, just another one which developers can target.

If the WinRT/Metro framework were Microsoft’s preferred direction for all application development then it wouldn’t be the bastard step-child of their main development technologies; it would become the new .NET. Whilst it is going to be the framework for cross platform applications, it’s most definitely not going to be the platform for native development on Windows PCs. The argument can be made that Microsoft wants to transition everyone to WinRT as the default platform, but I’ve seen no evidence to support that apart from the idea that because the Metro UI is front and centre it must be Microsoft’s main focus.

I find that hard to believe as, whilst Metro is great on tablets and smart phones, it unfortunately struggles in a mouse and keyboard environment, as nearly every review of it has mentioned. Microsoft isn’t stupid; they’ve likely heard much of this feedback through other channels and will be integrating it into their future product strategies. To simply say that they’ll go “Nope, we know we’re going in the right direction and we’re completely killing the desktop” is to be ignorant of the fact that Microsoft works extremely closely with their customers, especially the big organisations who have been the most vocal opponents of Metro-first design. They’re also a pretty big player in the games industry, what with that Xbox being so darn popular, so again I fail to see how they wouldn’t take the feedback on board, especially from such a dedicated audience as us PC gamers.

I’d lend some credence to the theory if the desktop environment hadn’t received much love in Windows 8 in lieu of all the work done on Metro but yet again I find myself coming up empty handed. The UI received a massive overhaul so that the styling would be in line with the rest of Microsoft’s products and there have been numerous improvements in functionality and usability. Why Microsoft would invest so heavily in something that will be slated to be removed within a couple generations of Windows releases is beyond me as most of their deprecated technologies receive no updates for decades prior to them being made obsolete.

And the applications, oh don’t get me started about Microsoft’s own applications.

Whilst Metro has some of the basic applications available for it (like Office and… yeah, Office) all of Microsoft’s current catalogue received its revamp as desktop applications, not Metro apps. You’d think that if their future direction was going to be all Metro-esque then more of their staple application suites would have received that treatment, but they didn’t. In fact the number of applications available on the desktop versus the ones available in Metro makes it look more like Metro was the afterthought of the desktop and not the other way around.

If Microsoft’s future is going to be all Windows Store and WinRT apps there’s really no evidence to show for it, and this is the reason why I don’t feel sympathetic towards the developers who are bellyaching about it. Sure, if you take a really, really narrow view of the Microsoft ecosystem it looks like the end is nigh for the current utopia of game development that is Windows 7, but in doing so you’re ignoring the wealth of information that proves otherwise. The Windows Store might not be your distribution platform of choice (and it likely never will be) but don’t think that the traditional methods you’ve been using are going anywhere because, if Microsoft’s overall strategy is anything to go by, they aren’t.


Building And Deploying My First Windows Azure App.

I talk a big game when it comes to cloud stuff and for quite a while it was just that: talk. I’ve had a lot of experience in enterprise IT with virtualization and the like, basically all the stuff that powers the cloud solutions we’re so familiar with today, but it wasn’t until recently that I actually took the plunge and started using the cloud for what it’s good for. There were two factors at work here: the first being that cloud services usually require money to use them and I’m already spending enough on web hosting as it is (this has since been sorted by joining BizSpark), but mostly it was time constraints, as learning to code for the cloud properly is no small feat.

My first foray into developing for the cloud, specifically Windows Azure, was back sometime last year when I had an idea for a statistics website based around StarCraft 2 replays. After finding out that there was a library for parsing all the data I wanted (it’s PHP but, thanks to Phalanger, it’s only a few small modifications away from being .NET) I thought it would be cool to see things like how your actions per minute changed over time, along with other stats that aren’t immediately visible through the various other sites that had similar ambitions. With all that in mind I set out to code myself up a web service and I actually got pretty far with it.

However, due to the enormous amount of work required to get the site working the way I wanted it to, it ultimately ended up falling flat long before I attempted to deploy it. Still, I learnt all the valuable lessons of how to structure my data for cloud storage services, the different uses of worker and web roles and, of course, got an introduction to ASP.NET MVC, which is arguably the front end of choice for any new cloud application on the Windows Azure platform. I didn’t touch the cloud for a long time after that, until just recently when I made the move to all things Windows 8, which comes hand in hand with Visual Studio 2012.

Whilst Visual Studio 2010 was a great IDE in its own right, the cloud development experience on it wasn’t particularly great, requiring a fair bit of set up in order to get everything right. Visual Studio 2012 on the other hand is built with cloud development in mind and my most recent application, which I’m going to keep in stealth until it’s a bit more mature, was an absolute dream to build in comparison to my StarCraft stats application. The emulators remain largely the same but the SDK and tools available are far better than their previous incarnations. Best of all, deploying the application couldn’t be much simpler.

In order to deploy my application onto the production fabric all I had to do was follow the bouncing ball after right clicking my solution and hitting “Publish”. I had already set up my Azure subscription (which Visual Studio picked up on, downloading the publishing profile for me) but I hadn’t configured a single thing otherwise and the wizard did everything required to get my application running in the cloud. After that my storage accounts were available as a drop down option in the configuration settings for each of the cloud roles; no messing around with copying keys into service definition files or anything. After a few initial teething issues with a service that didn’t behave as expected when its table storage was empty, I had the application up and running without incident and it’s been trucking along well ever since.

I really can’t overstate just how damn easy it was to go from idea to development to production using the full Microsoft suite. For all my other applications I’ve usually had to spend a good few days after reaching a milestone configuring my production environment the same way as I had development, and 90% of the time I won’t remember all the changes I made along the way. With Azure it’s pretty much a simple change to 2 settings files (via dropdowns), publishing and then waiting for the application to go live. Using WebDeploy I can also test code changes without the risk of breaking anything, as a simple reboot of the instances will roll the code back to its previous version. It’s as fool proof as you can make it.

Now if Microsoft brought this kind of ease of development to traditional applications we’d start to see some real changes in the way developers build applications in the enterprise. Since the technology backing the Azure emulator is nothing more than a layer on top of SQL and general file storage I can’t envisage that wrapping it up into an enterprise level product would be too difficult, and then you’d be able to develop real hybrid applications that were completely agnostic of their underlying platform. I won’t harp on about it again as I’ve done that enough already, but suffice it to say I really think it needs to happen.

I’m really looking forward to developing more on the cloud as, with the experience being so seamless, it really reduces the friction I usually get when making something available to the public. I might be apprehensive about releasing my application to the public right now but it’s no longer a case of whether it will work properly or not (I know it will, since the emulator is pretty darn close to production); it’s now just a question of how many features I want to put in. I’m not denying that the latter could be a killer in its own right, as it has been in the past, but the fewer things I have to worry about the better, and Windows Azure seems like a pretty good platform for alleviating a lot of my concerns.

VMware VIM SDK Gotchas (or Ghost NICs, Why Do You Haunt Me So?).

I always tell people that on the surface VMware’s products are incredibly simple and easy to use, and for the most part that’s true. Anyone who’s installed an operating system can easily get a vSphere server up and running in no time at all and have a couple of virtual machines up not long after. Of course, with any really easy to use product, the surface usability comes from an underlying system that’s incredibly complex. Those daring readers who read my last post on modifying ESXi to grant shell access to non-root users got just a taste of how complicated things can be, and the deeper you dive into VMware’s world the more complicated things become.

I had a rather peculiar issue come up with one of the tools that I had developed. This tool wasn’t anything horribly complicated; all it did was change the IP address of some Windows servers and their ESXi hosts whilst switching the network over from the build VLAN to their proper production one. For the most part the tool worked as advertised and never encountered any errors, on its side at least. However people were noticing something strange about the servers that were being configured using my tool: some were coming up with a “Local Area Connection 2” and “vmxnet3 Ethernet Adapter #2” as their network connection. This was strange as I wasn’t adding in any new network cards anywhere and it wasn’t happening consistently. Frustrated, I dove into my code looking for answers.

After a while I figured the only place the error could be originating from was where I changed the server over from the build VLAN to the production one. Here’s the code, which I got from performing the same action in the VIClient proxied through Onyx, that I used to make the change:

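            // Find the VM by name, locate its vmxnet3 NIC, then rebuild the device spec field by field
            // (this is the version that ended up producing the ghost NICs).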
            NameValueCollection Filter = new NameValueCollection();
            Filter.Add("name", "^" + ServerName);
            VirtualMachine Guest = (VirtualMachine)Client.FindEntityView(typeof(VirtualMachine), null, Filter, null);
            VirtualMachineConfigInfo Info = Guest.Config;
            VirtualDevice NetworkCard = new VirtualDevice();
            int DeviceKey = 4000;
            foreach (VirtualDevice Device in Info.Hardware.Device)
            {
                String Identifier = Device.ToString();
                if (Identifier == "VMware.Vim.VirtualVmxnet3")
                {
                    DeviceKey = Device.Key;
                    NetworkCard = Device;
                    Console.WriteLine("INFO - Device key for network card found, ID: " + DeviceKey);
                }
            }
            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            Spec.DeviceChange[0].Device = new VirtualVmxnet3(); // the device object has to exist before its properties are set below
            Spec.DeviceChange[0].Device.Key = DeviceKey;
            Spec.DeviceChange[0].Device.DeviceInfo = new VMware.Vim.Description();
            Spec.DeviceChange[0].Device.DeviceInfo.Label = Card.DeviceInfo.Label;
            Spec.DeviceChange[0].Device.DeviceInfo.Summary = "Build";
            Spec.DeviceChange[0].Device.Backing = new VMware.Vim.VirtualEthernetCardNetworkBackingInfo();
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).UseAutoDetect = false;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).InPassthroughMode = false;
            Spec.DeviceChange[0].Device.Connectable = new VMware.Vim.VirtualDeviceConnectInfo();
            Spec.DeviceChange[0].Device.Connectable.StartConnected = Card.Connectable.StartConnected;
            Spec.DeviceChange[0].Device.Connectable.AllowGuestControl = Card.Connectable.AllowGuestControl;
            Spec.DeviceChange[0].Device.Connectable.Connected = Card.Connectable.Connected;
            Spec.DeviceChange[0].Device.Connectable.Status = Card.Connectable.Status;
            Spec.DeviceChange[0].Device.ControllerKey = NetworkCard.ControllerKey;
            Spec.DeviceChange[0].Device.UnitNumber = NetworkCard.UnitNumber;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).AddressType = Card.AddressType;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).MacAddress = Card.MacAddress;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).WakeOnLanEnabled = Card.WakeOnLanEnabled;
            Guest.ReconfigVM_Task(Spec);

My first inclination was that I was getting the DeviceKey wrong, which is why you see me iterating through all the devices to try and find it. After running the tool many times over though, it seems my initial idea of just using 4000 would have worked since they all had that same device key anyway (thanks to all being built in the same way). Now according to the VMware API documentation on this operation nearly all of those parameters you see up there are optional, and earlier revisions of the code included only enough to change the DeviceName to Production without the API throwing an error at me. Frustrated, I added in all the remaining parameters, only to be greeted by the dreaded #2 NIC upon reboot.

It wasn’t going well for me, I can tell you that.

After digging around in the API documentation for hours and fruitlessly searching the forums for someone who had run into the same issue as me, I went back to tweaking the code to see what I could come up with. I was basically passing all the information I could back to it, but the problem still persisted with certain virtual machines. It then occurred to me that I could in fact pass the existing network card back as a parameter and then only change the parts I wanted to. Additionally I found out where to get the current ChangeVersion of the VM’s configuration and, with both of these combined, I was able to change the network VLAN successfully without generating another NIC. The resultant code is below.

            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
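            // Supplying the VM's current ChangeVersion ties this reconfigure to the configuration just read,
            // which, along with passing the existing card back, is what stopped the duplicate NIC appearing.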
            Spec.ChangeVersion = Guest.Config.ChangeVersion;
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
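            // Hand the existing card object straight back and only touch its backing network below.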
            Spec.DeviceChange[0].Device = Card;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            Guest.ReconfigVM_Task(Spec);

What gets me about this whole thing is that the VMware API documentation says all the other parameters are optional when it’s clear that there’s some unexpected behavior when they’re not supplied. The strange thing is that if you check the network cards right after making this change they will appear to be fine; it’s only after a reboot (and only on Windows hosts, I haven’t tested Linux) that these issues occur. Whether this is a fault of VMware, Microsoft or somewhere between the keyboard and chair is an exercise I’ll leave up to the reader, but it does feel like there’s an issue with the VIM API. I’ll be bringing this up with our Technical Account Manager at our next meeting and I’ll post an update should I find anything out.