Posts Tagged ‘code’


Is The Second Hand Market Really That Detrimental?

I’m not a big user of the second hand market but there have been times when I’ve delved into it in order to get what I want. Usually it’s when I find out about a particular collector’s edition too late to buy a retail copy, so I’ll just wait it out until someone wants to hock their copy on eBay, where I’ll snap it up for a song. The last game I did this with was Uncharted 3 (although I failed to mention the saga in the review) and whilst I didn’t get all the collector’s edition downloadable goodies, the seller went out of their way to make sure I got a similar amount of value as they did when they purchased it new. I certainly didn’t expect this but it was deeply appreciated all the same.


However the seller’s generosity is a symptom of the larger problem at play here. Almost 2 years ago a silent war began between developers (well, most likely the publishers) and the second hand market, with the first sale doctrine being usurped by crippling used games. The first title I purchased that was affected by this was Mass Effect 2 and, whilst I have no intention of ever selling that game, the fact that it was crippled after the initial sale didn’t sit particularly well with me. The trend has been on the increase as of late, with many games including some form of one time use DLC in order to make second hand titles less attractive.

It gets even worse when rumours start surfacing that the next generation of consoles will natively support features that cripple second hand games, removing the requirement for game developers to implement their own systems. The justification would probably be something along the lines of “this is what we’ve done for ages on the PC”, which is kind of true if you count CD keys, but those were usually transferable. There’s also the sticky issue of digital downloads, which currently have no method on any platform for enabling resale, which is why many publishers are beginning to favour those platforms over physical retail releases.

The golden days of unsellable digital titles (and by extension crippled second hand titles) may not be long for this world, however, as the German consumer protection group VZBV has started legal proceedings against Valve regarding the Steam platform. This isn’t the first time they’ve gone up against Valve, but recent rulings in the EU have set precedents which could lead to digital distribution platforms having to implement some kind of second hand market. Considering Steam has been dealing in digital trade for many years now it’s not like they’re incapable of delivering such functionality; they simply haven’t had the incentive to do so. Heavy fines from the EU could be the push they need to get moving in the right direction, but we’ll have to wait until the court case resolves before we see any real movement on this issue.

I have real trouble seeing how the second hand game market is such a detriment to publishers. Indeed many people use trade-ins to fund new game purchases, and removing that option would put downward pressure on new sales, to the tune of 10% or so. Now I don’t know how much revenue publishers are making off those second hand uncrippling schemes, but I’m sure a 10% increase is above it, especially if you count the goodwill generated from not being a dick about the used market. Valve would be heralded as the second coming if they enabled used game trading on Steam, even if they charged a nominal fee to facilitate the transaction.

Really I can’t see any downsides to supporting the second hand market, and actively working against it doesn’t do the publishers any favours. I’m not saying they have to go out and actively help facilitate it, but they could simply stop working against it like they’re doing right now. Digital distributors do have to pick up their game in this regard, however, and I hope it doesn’t come down to strong-arming them with the law. Should the EU ruling hold up, though, that could very well be what happens, and it would at least be a positive result for us consumers.

VMware VIM SDK Gotchas (or Ghost NICs, Why Do You Haunt Me So?).

I always tell people that on the surface VMware’s products are incredibly simple and easy to use, and for the most part that’s true. Anyone who’s installed an operating system can easily get a vSphere server up and running in no time at all and have a couple of virtual machines up not long after. Of course, with any really easy to use product the surface usability comes from an underlying system that’s incredibly complex. Those daring readers who read my last post on modifying ESXi to grant shell access to non-root users got just a taste of how complicated things can be, and the deeper you dive into VMware’s world the more complicated things become.

I had a rather peculiar issue come up with one of the tools that I had developed. This tool wasn’t anything horribly complicated; all it did was change the IP address of some Windows servers and their ESXi hosts whilst switching the network over from the build VLAN to the proper production one. For the most part the tool worked as advertised and never encountered any errors, on its side at least. However people were noticing something strange about the servers being configured with my tool: some were coming up with a “Local Area Network 2” and “vmxnet3 Ethernet Adapter #2” as their network connection. This was strange as I wasn’t adding any new network cards anywhere, and it wasn’t happening consistently. Frustrated, I dove into my code looking for answers.

After a while I figured the only place the error could be originating from was where I changed the server over from the build VLAN to the production one. Here’s the code I used to make the change, which I got by performing the same action in the VI Client proxied through Onyx:

            NameValueCollection Filter = new NameValueCollection();
            Filter.Add("name", "^" + ServerName);
            VirtualMachine Guest = (VirtualMachine)Client.FindEntityView(typeof(VirtualMachine), null, Filter, null);
            VirtualMachineConfigInfo Info = Guest.Config;
            VirtualDevice NetworkCard = new VirtualDevice();
            int DeviceKey = 4000;
            foreach (VirtualDevice Device in Info.Hardware.Device)
            {
                String Identifier = Device.ToString();
                if (Identifier == "VMware.Vim.VirtualVmxnet3")
                {
                    DeviceKey = Device.Key;
                    NetworkCard = Device;
                    Console.WriteLine("INFO - Device key for network card found, ID: " + DeviceKey);
                }
            }
            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            Spec.DeviceChange[0].Device = new VirtualVmxnet3(); // the spec needs a device instance before its properties can be set
            Spec.DeviceChange[0].Device.Key = DeviceKey;
            Spec.DeviceChange[0].Device.DeviceInfo = new VMware.Vim.Description();
            Spec.DeviceChange[0].Device.DeviceInfo.Label = Card.DeviceInfo.Label;
            Spec.DeviceChange[0].Device.DeviceInfo.Summary = "Build";
            Spec.DeviceChange[0].Device.Backing = new VMware.Vim.VirtualEthernetCardNetworkBackingInfo();
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).UseAutoDetect = false;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).InPassthroughMode = false;
            Spec.DeviceChange[0].Device.Connectable = new VMware.Vim.VirtualDeviceConnectInfo();
            Spec.DeviceChange[0].Device.Connectable.StartConnected = Card.Connectable.StartConnected;
            Spec.DeviceChange[0].Device.Connectable.AllowGuestControl = Card.Connectable.AllowGuestControl;
            Spec.DeviceChange[0].Device.Connectable.Connected = Card.Connectable.Connected;
            Spec.DeviceChange[0].Device.Connectable.Status = Card.Connectable.Status;
            Spec.DeviceChange[0].Device.ControllerKey = NetworkCard.ControllerKey;
            Spec.DeviceChange[0].Device.UnitNumber = NetworkCard.UnitNumber;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).AddressType = Card.AddressType;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).MacAddress = Card.MacAddress;
            ((VirtualVmxnet3)Spec.DeviceChange[0].Device).WakeOnLanEnabled = Card.WakeOnLanEnabled;
            Guest.ReconfigVM_Task(Spec);

My first inclination was that I was getting the DeviceKey wrong, which is why you see me iterating through all the devices to try and find it. After running this tool many times over, though, it seems my initial idea of just using 4000 would have worked since they all had the same device key anyway (thanks to all being built in the same way). Now, according to the VMware API documentation for this function, nearly all of the parameters you see up there are optional, and earlier revisions of the code included only enough to change the DeviceName to Production without the API throwing an error at me. Frustrated, I added in all the remaining parameters only to be greeted by the dreaded #2 NIC upon reboot.
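For reference, that earlier minimal revision looked something like this (reconstructed from memory rather than pulled from source control, so treat it as a sketch): it set only the device key and the new backing, the API accepted it without complaint, and the ghost NIC still appeared.

            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            // only the key and the new backing, nothing else
            Spec.DeviceChange[0].Device = new VirtualVmxnet3();
            Spec.DeviceChange[0].Device.Key = DeviceKey;
            Spec.DeviceChange[0].Device.Backing = new VMware.Vim.VirtualEthernetCardNetworkBackingInfo();
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            Guest.ReconfigVM_Task(Spec);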

It wasn’t going well for me, I can tell you that.

After digging around in the API documentation for hours and fruitlessly searching the forums for someone who’d had the same issue as me, I went back to tweaking the code to see what I could come up with. I was basically passing all the information I could back to the API but the problem still persisted with certain virtual machines. It then occurred to me that I could in fact pass the existing network card back as the device and then change only the parts I wanted to. Additionally I found out where to get the current ChangeVersion of the VM’s configuration, and when both of these were combined I was able to change the network VLAN successfully without generating another NIC. The resultant code is below.

            VirtualVmxnet3 Card = (VirtualVmxnet3)NetworkCard;
            VirtualMachineConfigSpec Spec = new VirtualMachineConfigSpec();
            Spec.DeviceChange = new VirtualDeviceConfigSpec[1];
            Spec.ChangeVersion = Guest.Config.ChangeVersion;
            Spec.DeviceChange[0] = new VirtualDeviceConfigSpec();
            Spec.DeviceChange[0].Operation = VirtualDeviceConfigSpecOperation.edit;
            Spec.DeviceChange[0].Device = Card;
            ((VirtualEthernetCardNetworkBackingInfo)Spec.DeviceChange[0].Device.Backing).DeviceName = "Production";
            Guest.ReconfigVM_Task(Spec);

What gets me about this whole thing is that the VMware API documentation says all the other parameters are optional when it’s clear there’s some unexpected behaviour when they’re not supplied. The strange thing is that if you check the network cards right after making this change they’ll appear to be fine; it’s only after a reboot (and only on Windows guests, I haven’t tested Linux) that these issues occur. Whether this is a fault of VMware, Microsoft or somewhere between the keyboard and chair is an exercise I’ll leave up to the reader, but it does feel like there’s an issue with the VIM API. I’ll be bringing this up with our Technical Account Manager at our next meeting and I’ll post an update should I find anything out.

Flow, Optimization and Making Progress.

I believe everyone is familiar with the concept of being “in the zone”, i.e. that state you attain when you’re so intensely focused on something that time becomes irrelevant and all you’re focused on is achieving a certain goal. I personally find myself in this state quite often, usually when I’m writing here, gaming or programming. Whilst I knew it was a common phenomenon I only learnt recently that it’s also recognised in psychology, where it’s termed Flow. The concept itself is interesting and most recently I’ve started to grapple with one of the more subtle aspects, defined as point number 8 of the conditions of Flow: “The activity is intrinsically rewarding, so there is an effortlessness of action”.

This weekend just gone saw me back, as I almost always am, coding away on my PC. Since I’m somewhat of a challenge junkie I’ll always seek out the novel parts of an application first rather than the rudimentary, and the first day saw me implementing some new features. This always goes well and I’ll be firmly in Flow for hours at a time, effortlessly jumping through reams of documentation and masses of Google searches as I start to nail down my problem. Once the new feature is done, of course, I’ll choose another to start work on, thereby maintaining my Flow and project progress.

However I’ve found that certain programming challenges are like kryptonite to achieving Flow. I discovered this on the second day of my weekend when I sat down to start work again, only to notice that one of the tasks on my TODO list was to rework one of the earlier pages I had built to use less JavaScript and more ASP.NET Razor. The reasoning behind this is simple: I’m really atrocious at JavaScript. The page in question looked good and did the job it was meant to, but much of its content was generated by some JavaScript code I had found on the Internet and hacked into working for me. That meant maintaining it was going to be an issue, so I set out to optimize it.

Of course the optimization process was fraught with the perils of trying to replicate in Razor what I had hacked together in JavaScript with only a half understanding of what I was doing at the time. That meant untangling the mess of code that someone else had written and then translating it into another language that was more maintainable for someone like me. From a Flow perspective this kind of work isn’t very rewarding since I’m not achieving anything new and the benefits will only be realised by future me, that jerk who’s always off at some indeterminate time in the future. However the perfectionist in me knows that time saved at this point could mean multiples more saved later on, and therein lies the conundrum.

There’s a great quote from Donald Knuth (of The Art of Computer Programming fame) that says “premature optimization is the root of all evil”, which is basically a warning against over-optimizing your code whilst it’s still in the early stages. I’m a firm believer in the idea that you shouldn’t act like you have problems of scale until you have them, but there are some fundamental differences between regular and scalable code that could prove incompatible with your codebase should you not make the decision early on in the piece. Of course optimization comes at the cost of progress on other pieces of work, thus a balancing act between the two is required if your code is ever to see the light of day.

I guess I find it strange that optimizing my own code was so detrimental to achieving that state of coding nirvana. It’s quite possible it was just the problem I was working on, as a previous optimization I had done, developing a cache system for a web service I was querying, seemed to have no ill effects. However that particular challenge was quite novel, as I hadn’t created anything like it previously and the feedback was quite clear when I finally achieved my goal. Unfortunately I have the feeling that most optimization problems will be more like the Razor rework than the cache, but so long as I write half decent code in the first place I hopefully won’t have to deal with them as much.

Take Off The Blinkers.

Look, I’m not going to say that I’m above rabid fanboyism. In fact there are multiple occasions where I’ve made up excuses for some of my companies of choice (notably BioWare and Sony), but I usually at least take the time to find out all the facts before disregarding them completely. Mostly I do this so I can use my opponent’s position against them, much like I’m doing in a fledgling tweet battle with one of my friends, but if I come across a hard fact that I can’t get around I’ll do the requisite back flip and change my position on the matter, like I did when the iPad sold like hot cakes.

However the latest storm comes from none other than the fan boys of Apple. A couple of days ago Apple released the latest version of their Integrated Development Environment (IDE), Xcode 4. Personally I wasn’t excited about the release since, if I had my way, I’d do everything in Visual Studio (and it means yet another 4GB download, eurrgh), but some of the features piqued my interest. The integration of Interface Builder into the core Xcode application is a welcome change, as are the improvements to the debugger and the intelligent error detection engine. I haven’t yet had a go with it but the reviews I’ve read so far are positive, so I’m sure it will make my iPhone coding life a little easier.

However Apple made the controversial move of charging $4.99 for it through the Mac App Store (it’s still free to developers who are paying $99/year). Whilst the barrier to entry for Xcode is well above that thanks to the Apple hardware tax, it still pissed off a good number of enthusiasts since Apple doesn’t ship a compiler with OS X, leaving many to either go without or go the dark route of installing GCC themselves. Personally I didn’t care either way since I’m already well over $4,000 in the hole just for the privilege of developing for iOS, but what got my goat was when people started comparing it to Visual Studio’s pricing.

Now since Visual Studio is aimed at corporations its pricing is, how would you say, corporately priced. The cheapest version you can find on the site is $549, a whopping 110 times the price of Apple’s offering. Now whilst I could argue that the value of Visual Studio is well worth the price of admission (and it is, even if it’s just for the debugger), you’d have to be a loon to pay that price if you just wanted to develop apps for a single platform. That’s because Microsoft offers special platform specific versions of Visual Studio for free under the Express line of products. There are 4 different versions there currently and combined they cover pretty much all types of development on the Windows platform. Apple no longer offers Xcode free in any form, so realistically the comparison to Visual Studio is apples to oranges: one is either 110 times the cost or, reversed, infinitely more expensive (literally).

Perhaps I’m getting too worked up over an issue that in reality means nothing, since most people retweeting this nonsense are probably not developers. But still, when people show a blatant disregard for simple facts (hell, even a simple Google search would have done) it gets me all kinds of angry. Couple that with a complete lack of other inspiration for today’s post and you get this ranty, nigh on pointless post about Apple fan boys. I probably shouldn’t be so angry at the people who are simply retweeting the nonsense, but it’s exactly this kind of me-tooism that causes the kind of zero-value blogging that’s reducing the signal on the Internet to be nigh on indistinguishable from the noise.

 

How I Killed My University Project.

It’s the beginning of 2006 and the end is in sight for my university career. It’s been a crazy 3 years up until this point, having experienced both the dizzying highs of excelling in a subject and the punishing lows of failing to understand even the basic concepts of some units properly. Still, I haven’t failed a single subject (despite some near misses) and really the only thing standing between me and that piece of paper I’ve been chasing is my final year, most of which will be dedicated to working on an engineering project. I had been looking forward to this for a while as I felt it would be a chance to test my mettle as a project manager and hopefully create something valuable in the process.

The year started off well as I found myself in a project team of 4, including 2 long time friends and a new acquaintance who was exceptionally skilled. After brainstorming ideas we eventually settled on creating a media PC with a custom interface based on the open source MythTV project, which would handle most of the back end work for us. After getting a space to work in we covered the whiteboard in dozens of innovative ideas, ranging from TiVo-like recording features to remoteless operation based on tracking a user’s movement. Looking at the list we were convinced that even those features wouldn’t be enough to fill a year’s worth of development effort, but thought it best to settle on them first before trying to make more work for ourselves. With the features in mind I set about creating a schedule and we set about our work.

Initially everything was going great, we were making quite a lot of progress and the project was shaping up to be one of the best of the year. The hardware design and implementation was looking phenomenal, so much so that I made the brash move of saying there was a potential market for a mass produced version of the device. Our lecturers showed a keen interest in it and we even managed to come in second place for a presentation competition amongst all the project students, narrowly losing out to an autonomous robot that could map out and navigate its surroundings. We were definitely onto a winner with this idea.

However my desire to project manage the 3 other people started to take its toll on the project. Realistically, in a team of 4 everyone needs to pitch in to make sure stuff gets done; there’s really no room for designated roles. I, however, kept myself at arm’s length from any solid development work, instead trying to control the process and demanding vast reams of documentation from those doing the work. Additionally I failed to realize that the majority of the coding work was being done by a single team member, which meant that only they understood it, making collaboration on it next to impossible. Seeing the beginnings of a sinking ship I called everyone together to try and figure things out, and that’s when things really started to turn sour.

The primary coder expressed their concerns that no one else was doing any work and I, still not realizing that I didn’t need to be a project manager, instructed them to take a week off so the others could get up to speed. This didn’t work as well as I’d planned, as they continued to do all the work themselves, effectively locking everyone else out from contributing to the effort. I did manage to get the star developer to collaborate with the others, but by that point it was already too late as they’d usually have to rewrite any code that wasn’t their own.

In order to save some face in this whole project I elected to do the project report entirely on my own, realistically a task that needed to be done by all of us (just like the project itself). I spent countless hours cobbling everything together, piecing random bits of documentation and notes into something resembling a professional report. It wasn’t amazing but it was enough to get the approval of everyone else in the team and our project co-ordinator, so a week before the final demonstration I handed it in, wanting to be done with this project once and for all.

The final demonstration was no picnic either, with everyone in the team (bar me) staying at university until midnight before the presentation. We managed to demonstrate a much cut down version of our initial vision to the class with only a few minor hiccups, and the 2 honours side projects went along quite well. Afterwards we hurriedly bundled the project away into one of the members’ cars (he had provided all the hardware on the proviso he got to keep it), happy to be done with it once and for all.

For 2 years afterwards I struggled to figure out why a project that started off so well tanked so badly. It wasn’t until I was officially employed as a project manager that I figured out that the most toxic element in the whole ordeal was me, the power hungry idiot who contributed the least whilst ensuring that anyone trying to get things done was hampered by my interference. I failed to get everyone to collaborate effectively and hamstrung them with ridiculous requirements for documentation. In essence I was acting like a project manager on a big project when really it was anything but. The end result was a far cry from what it could have been, and one member of that project team still refuses to speak to me; I don’t blame them for doing so.

I suppose the best thing to come out of this is that I finally realized my weaknesses and actively worked to overcome them. Sure, it might have been too late for the university project, but I’m glad to say I didn’t inflict any such torment on a project whilst I was being paid to manage one, instead taking those lessons learned on board to make sure those projects were delivered as required. I still hold out hope that one day I’ll look back on those days with my former project members and laugh, but those project management war wounds will stick with me forever, reminding me that I’m not as infallible as I once thought I was.

Absence Makes The Heart Grow Fonder (or Not).

My time spent developing my passion project hasn’t been continuous since I first started working on it. The first iteration lasted about a month and was a mad rush to cobble something together to mark the momentous “milestone” of 100 blog posts. I then spent the next couple of months experimenting with Silverlight, managing to replicate and extend the base feature set to a point where I felt I was making progress. I then went on a 6 week hiatus from developing Geon to work on The Plan which, whilst making me a decent sized profit, never turned out to be the ticket to freedom I had hoped it would be. After taking a month off after that and coming back to look at Geon, I couldn’t help but think I was going about things in all the wrong ways, and came up with a completely new design.

This, I’ve found, is a common trend for me. Unless I continually work on a project I’ll always end up questioning the idea until I end up wondering what the point of doing it in the first place was. Initially this was quite good as, whilst the first few iterations of Geon showed solid progress, they were in all honesty horrid applications. However it was devastating for overall progress, as the paradigm shifts I underwent during these times of developmental absence meant that the new vision was wholly incompatible with the old, and I could see no way other than starting anew to get them back in line again. This is why the first 2 iterations didn’t have any form of user logins and the third had a sign up process so horrible that I don’t blame anyone for not signing up.

I had thought that short breaks were immune to this effect, as I had often taken a weekend or two off when a family event called or I was starting to feel burned out. However I hadn’t had the chance to do much work on Lobaco over the past 2 weeks thanks to being otherwise occupied, and those little tendrils of other worldly perspective started to creep in. Maybe it was the booze fuelled weekend where I came up with a list of 5 other potentially marketable ideas, or maybe it was just me pining for another break, but suddenly I felt like there were so many other things I should be doing rather than pursuing my almost 2 year old idea. I let myself think that I could take part of the weekend off to work on one of those ideas, but for some reason I just kept working on Lobaco.

I’m not sure if it was my persistence or hitting submit on my application to Y Combinator that did it, but instead of pursuing those ideas that had tempted me all week long I just fired up Xcode and started plugging away. Whilst it wasn’t my most productive weekend ever I did manage to tick off 2 more features for the iPhone client, leaving about 3 to go before my deadline at the end of March. I think the combination of a solid code base (with all those rudimentary things done so I don’t have to spend time researching them) and almost half a year of iOS development under my belt is enough to keep the momentum going, making sure I don’t give up on this version until it reaches 1.0.

I used to think that time away from coding was just as valuable as time spent in code, but that doesn’t seem to hold as true as it used to. Sure, my first breaks led to radical changes in my vision for the end product (and are responsible for the Lobaco that exists today), but once you hit that sweet spot time away can be quite destructive, especially if you’re as prone as I am to distraction by new ideas. Thankfully the last 6 months of momentum aren’t lost on me and 2 weeks away wasn’t enough to distract me from my end goal. It would have been too easy to start procrastinating again without realizing it.

Passion Misplaced: The Dark World of Game Leaks.

The last thing you want as a developer is for your code to go out into the wild before it’s ready. When that happens people start to build expectations on a product that’s not yet complete and form assumptions that, for better or worse, don’t align with the vision you had so carefully constructed. Most often this happens as a result of management pressure, and there have been many times in my career where I’ve seen systems moved into production long before they were ready for prime time. However the damage done there pales in comparison to what can be done to a game that’s released before it’s ready, and I’m almost ashamed to admit that I’ve delved into this dark world of game leaks before.

The key word there is, of course, almost.

I remember my first steps into this world quite well. It was late 2002 and news began to make the rounds that someone had leaked an early alpha build of Doom 3, the first new installment in the series in almost a decade. I was incredibly intrigued and began my search for the ill-gotten booty, scouring the vast recesses of eDonkey and Direct Connect looking for someone who had the magical files. Not long after, I was downloading the 380MB file over my dial up connection, sitting back whilst I waited for it to come down.

After it finished downloading I unzipped the package and waited whilst the crazy compression program they had used did its work, feverishly reassembling the code so that I could play it. This took almost an hour and the eventual result was close to double the size of the file I had downloaded, something I was quite thankful for given my connection. After a few tension filled seconds of staring at the screen I double clicked the executable and was greeted with the not yet released version of Doom 3. The game ran extremely poorly on my little box but even then I was awe struck, soaking up every second until it crashed on me. Satisfied, I sank back into my chair and hopped onto Trillian to talk to my friends about what I had just seen.

It wasn’t long until I jumped back into this world again. Just under a year later rumors started to make the rounds that none other than Valve had been subjected to a sophisticated attack and the current version of Half Life 2 copied. The gaming community’s reaction was mixed, as we had been promised that the game was ready for release that year but, as far as everyone could tell, the current build was nowhere near ready. Instead of jumping straight in this time, however, I sat back and considered my position. Whilst I was extremely eager to see Valve’s latest offering I had seen the damage done by Doom 3’s premature release, and my respect for Valve gave me much trepidation when considering taking the plunge once again. Still, upon seeing the files on someone’s computer at a LAN I couldn’t let the opportunity go by, and I snagged myself a copy.

The game I played back then, whilst by no means a full game, still left a long lasting impression on me. The graphics and environments were beautiful, and the only level I got to work properly (I believe it was the beach level) was made all the more fun by the inclusion of the makeshift jeep. I couldn’t bring myself to play it for long though; whilst I knew the code leak wasn’t the sole reason Valve delayed Half Life 2, playing it wasn’t going to bring the game to me any faster. This time around I deleted my copy of the leaked game and waited patiently for its final release.

Most recently it came to my attention that the Crysis 2 source, which apparently includes the full game and a whole host of other goodies, has made its way onto most popular BitTorrent sites. This time around, however, I haven’t even bothered to go and download the game, even just for curiosity’s sake. There’s less than a month to go until the official release and really I’d rather wait that long to play it legitimately than dive back into that dark world I left behind so long ago. The temptation was definitely there though, especially considering how much fun I had with the original Crysis, but a month isn’t a long time to wait, especially with the other games on my current backlog.

If there’s one common theme I’ve seen when these leaks come out it’s the passion that the community has for these game development companies and their flagship titles. Sure it’s misplaced, but the fever pitch reached with each of these leaks shows just how much people care about these games. Whilst a leak might damage the project initially, many of them go on to be quite successful, as both Half Life 2 and Doom 3 did. Crysis 2 should be no different, but I can still understand the heartache those developers must be going through; I don’t know what I’d do if someone nicked off with the source code to Lobaco.

Will I ever download a leaked copy of a game before its release? I can’t be sure, in all honesty. Although I tend to avoid the hype these days I still get really excited when I hear about some titles (Deus Ex: Human Revolution, for example) and that could easily overwhelm my sensibility circuits, forcing me to download the game. I do make good on purchasing the games when they’re released, however, and since I’m a bit of a collector’s edition nut I believe I’ve paid my penance for delving into the darker side of the gaming world. I can completely understand if game developers don’t see eye to eye with me on this issue, but I hope they recognize passion, however misplaced, when they see it.

Necessity is the Mother of Invention.

I’ve been developing computer programs on and off for a good 7 years and in that time I’ve come across my share of challenges. The last year or so has probably been the most challenging of my entire development career, as I’ve struggled to come to grips with the Internet’s way of doing things and how to enable disparate systems to talk to each other. Along the way I’ve often hit problems that on the surface appear next to impossible to solve, or reached a point where a new requirement makes an old solution no longer viable. Time has shown, however, that whilst I might not be able to find an applicable solution through hours of intense Googling or RTFM, there are always clues that lead to an eventual solution. I’ve found though that such solutions have to be necessary parts of the larger whole, otherwise I’ll simply ignore them.

Take for instance my past weekend’s work on Lobaco. Things had been going well: the last week’s work had seen me enable user sign ups in the iPhone application, and I had the beginnings of an enhanced post screen that allowed users to post pictures along with their posts. Initial testing of the features seemed to go well and I started testing the build on my iPhone. Quickly, however, I discovered that both of the new features struggled to upload images to my web server, crashing whenever a picture was over 800 by 600 in size. Since my web client seemed to handle this without an issue I wondered what the problem could be, so I started digging deeper.

You see, way back when, I had resigned myself to doing everything in JavaScript Object Notation, or JSON for short. The reasoning behind this was that, thanks to it being an open standard, nearly every platform out there has a native or third party library for serialising and deserialising objects, making my life a whole lot easier when it comes to cross platform communication (i.e. my server talking to an iPhone). The trouble with this format is that whilst it’s quite portable, everything done in it must be text. This causes a problem for large files like images, as they have to be converted into text before they can be sent over the Internet. The encoding I used for this is called Base64 and it has the unfortunate side effect of increasing the size of the file to be transferred by roughly a third. It also generates an absolutely massive string that brings any debugger to its knees if you try to display it, making troubleshooting issues hard.
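To put some rough numbers on that bloat, here’s a quick standalone snippet (mine, for illustration, not from the Lobaco code base) showing the inflation on a pretend 300KB image:

            // Base64 encodes every 3 bytes as 4 ASCII characters, so the
            // payload grows by a third before it even hits the wire
            byte[] image = new byte[300 * 1024]; // stand-in for a ~300KB photo
            new Random().NextBytes(image);
            string encoded = Convert.ToBase64String(image);
            Console.WriteLine("Raw bytes: " + image.Length);      // 307200
            Console.WriteLine("Base64 chars: " + encoded.Length); // 409600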

The image uploading I had designed and successfully used up until this point was now untenable, as the iPhone client simply refused to play nice with ~300KB strings. I set about trying to find a way around the problem, hoping for a quick fix. Whilst I didn’t find a good drag and drop solution I did come across this post, which detailed a way to program a Windows web service that could receive arbitrary data. Implementing their solution exactly as detailed still didn’t work as advertised, but after wrangling the code and overcoming the inbuilt message size limits in WCF I was able to upload images without having to muck around with enormous strings. This did mean changing a great deal of how the API and clients worked, but in the end it was worth it for something that solved so many problems.
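The general shape of the technique, for anyone hitting the same wall, is a WCF operation that takes a raw Stream as its body. This is a minimal sketch of that idea rather than my actual Lobaco service (the interface and method names here are made up for illustration):

            using System.IO;
            using System.ServiceModel;
            using System.ServiceModel.Web;

            [ServiceContract]
            public interface IImageUpload
            {
                // A single Stream parameter tells WCF to hand over the raw
                // request body instead of deserialising a structured message
                [OperationContract]
                [WebInvoke(Method = "POST", UriTemplate = "images/{fileName}")]
                void UploadImage(string fileName, Stream imageData);
            }

            public class ImageUploadService : IImageUpload
            {
                public void UploadImage(string fileName, Stream imageData)
                {
                    // The bytes arrive as-is, no Base64 decoding required
                    using (FileStream file = File.Create(Path.Combine(@"C:\Uploads", fileName)))
                    {
                        imageData.CopyTo(file);
                    }
                }
            }

Getting past the size limits is then a matter of raising maxReceivedMessageSize (and, for big files, setting transferMode to Streamed) on the webHttpBinding in your service configuration.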

The thing is, before I went on this whole adventure, had you asked me if such a thing was possible I would’ve probably told you no, at least not within the WCF world. In fact much of my early research into this problem centred around implementing a small PHP script to accomplish the same thing (as there are numerous examples of that already), however the lack of integration with my all-Microsoft solution meant I’d be left with a standalone piece of code that I wouldn’t have much interest in improving or maintaining. By simple virtue of the fact that I had to come up with a solution to this problem, I tried my darnedest to find it, and lo, I ended up creating something I couldn’t find anywhere else.

It’s like that old saying that necessity is the mother of invention, and that’s true both for this problem and for Lobaco as an idea in itself. Indeed many of the current Internet giants and start ups were founded on the idea of solving a problem the founders themselves were experiencing and felt could be done better. I guess I just find it fascinating how granular a saying like that can be, with necessity driving me to invent solutions at all levels. It goes to show that embarking into the unknown is a great learning experience, and there’s really no substitute for diving in head first and trying your hardest to solve an as yet unsolved problem.

Fast Scrolling UITableView: Updates for iOS 4.2.

I’ll be honest and say that most of the programs I’ve built have never really been that resource intensive, so optimising them for performance hadn’t been much of a priority. Sure there was the occasional thing I’d catch and try to improve, like when an early copy of Geon had a drop shadow around the map that inexplicably made it run like a dog, but for the most part I’d just code them up and leave it at that. Coding for the iPhone and other resource poor systems, however, does not afford me such luxuries, and performance tuning the app has taken up a considerable amount of my development time. The pay offs, though, have been quite great.

After getting my first shot at the Lobaco app up and running I noticed there was considerable slow down when scrolling through the main list of items. Since I’m a big fan of the official Twitter app I knew it was possible to have quite smooth scrolling even with multiple images and gobs of text on the screen. As it turns out I wasn’t alone in having performance problems with UITableViews (the class used for that main list display), and the developer behind the Twitter app posted up some code to demonstrate how they achieved such fast scrolling.

If you follow that link you’ll notice that the blog post in question is now over 2 years old, dating back to when the iPhone 3G was still the top offering from Apple. Whilst the code given in that post still functions, I ran into a couple of issues implementing it with the latest SDK (4.2). The first issue you’ll hit is that the initWithFrame initializer used to create your cell is now deprecated. Whilst it should still function, I could not get my code to work until I made the following change in ABTableViewCell.m:

// Copyright (c) 2008 Loren Brichter
//
// Permission is hereby granted, free of charge, to any person
// obtaining a copy of this software and associated documentation
// files (the "Software"), to deal in the Software without
// restriction, including without limitation the rights to use,
// copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following
// conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
// OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
// HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
// WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
// OTHER DEALINGS IN THE SOFTWARE.
//
//  ABTableViewCell.m
//
//  Created by Loren Brichter
//  Copyright 2008 Loren Brichter. All rights reserved.
//

#import "ABTableViewCell.h"

@interface ABTableViewCellView : UIView
@end

@implementation ABTableViewCellView

- (void)drawRect:(CGRect)r
{
	[(ABTableViewCell *)[self superview] drawContentView:r];
}

@end

@implementation ABTableViewCell

/*- (id)initWithFrame:(CGRect)frame reuseIdentifier:(NSString *)reuseIdentifier
{
    if(self = [super initWithFrame:frame reuseIdentifier:reuseIdentifier])
	{
		contentView = [[ABTableViewCellView alloc] initWithFrame:CGRectZero];
		contentView.opaque = YES;
		[self addSubview:contentView];
		[contentView release];
    }
    return self;
}*/

- (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
{
	if(self = [super initWithStyle:style reuseIdentifier:reuseIdentifier])
	{
		contentView = [[ABTableViewCellView alloc] initWithFrame:CGRectZero];
		contentView.opaque = YES;
		contentView.backgroundColor = [UIColor whiteColor];
		[self addSubview:contentView];
		[contentView release];
    }
    return self;
}

- (void)dealloc
{
	[super dealloc];
}

- (void)setFrame:(CGRect)f
{
	[super setFrame:f];
	CGRect b = [self bounds];
	b.size.height -= 1; // leave room for the separator line
	[contentView setFrame:b];
}

- (void)setNeedsDisplay
{
	[super setNeedsDisplay];
	[contentView setNeedsDisplay];
}

- (void)drawContentView:(CGRect)r
{
	// subclasses should implement this
}

@end

The main change is to replace the old initWithFrame with the new initWithStyle. This also requires changing the super call to the UITableViewCell class we’re subclassing, but apart from that everything else remains the same. Once I had that problem out of the way my custom cells were drawing properly and appeared to be scrolling much more smoothly than before. However I noticed another strange issue with my cells: they seemed to be displaying data at random from my data array. Try as I might I couldn’t find the solution to this problem, until I went back to the fundamentals of the UITableView.

You see, creating cells for a UITableView is a pretty expensive process, just as creating new objects is for any system. This is even more pronounced with the resource limitations of the iPhone, so the iOS SDK employs a simple trick to work around it. Instead of creating and deleting a new cell every time one is needed, it will instead reuse a cell that’s no longer in use, i.e. one that has scrolled off screen. Since the cell will usually have new data in it at this point, when it comes back on screen it should redraw itself to reflect this. However it seems that the ABTableViewCell class wasn’t doing this, and the only way I could get it to update the data was by clicking on the cell, which forced a refresh.

If you’re not using this class then you’ll probably never encounter this issue, and I believe that’s because of the way ABTableViewCell does its drawing. In order to get the performance improvement you’re basically bypassing the regular way of drawing the cell and doing it yourself. This has enormous performance benefits since you’re not doing any unnecessary drawing, but it appears that the UITableViewCell class no longer calls the drawContentView function as part of its normal drawing routine. Thankfully this can be solved with a one liner in your UITableView controller class, letting the cell know it needs to redraw itself with setNeedsDisplay:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {

    static NSString *CellIdentifier = @"Cell";
	int nodeCount = [displayItems count];

    LobacoTableCell *cell = (LobacoTableCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        //cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease];
		cell = [[[LobacoTableCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:CellIdentifier] autorelease];
    }

    // Configure the cell...

	if (nodeCount > 0)
	{
		Post *post = [displayItems objectAtIndex:indexPath.row];
		cell.post = post;
		if (!post.profileImage)
		{
			if (self.tableView.dragging == NO && self.tableView.decelerating == NO)
			{
				[self startImageDownload:post forIndexPath:indexPath];
			}

			cell.image = [UIImage imageNamed:@"Placeholder.png"];

		}
		else
		{
			cell.image = post.profileImage;
		}
	}
	[cell setNeedsDisplay];
    return cell;
}

I do this after I’ve done all the reconfiguration of the cell so that it’s drawn with all the correct information. The image code in this section will also trigger a redraw of the cell when it’s finished downloading the image (in this case the user’s profile picture), ensuring that it’s displayed immediately rather than when the cell drops out and comes back into view again. With all these fixes in place my new custom UITableViewCell works perfectly and the scrolling performance is glassy smooth.

All of the above issues I encountered after upgrading my Xcode installation to iOS 4.2, and despite my intense Googling I couldn’t find any real solutions to them. If you’re a budding iPhone developer like me, struggling to figure out why some things just aren’t working the way they should, I hope this post gives you a little insight into what was going wrong and ultimately how to fix it. It’s these kinds of curious problems that frustrate the hell out of me when I’m in them, but they’re always quite satisfying once you’ve managed to knock them over.

WTF Was This Guy Doing: My Refactoring Experience.

It doesn’t take much to send me on a coding spree. Sometimes it’s something as simple as an idea that I need to implement now since it fundamentally changes the way the application will evolve, and other times it’s something right out of the blue. Last night was the latter: after finishing up some preliminary packing for my trip to the US on Sunday (stay tuned for pictures, posts and vlogs!) and playing a couple of games of Starcraft 2, I found myself watching the latest episode of the Random Show. In essence it’s just Tim Ferriss (The 4-Hour Workweek) and Kevin Rose (Digg founder) talking about all sorts of things, but a common theme is always that of entrepreneurship. As someone aspiring to that lifestyle I’m usually fixed firmly to the screen, hoping for some gems that will help me along my merry way. Last night, however, provided something completely different.

After listening to them for quite a while I looked down at my notepad at the list of features I’ve slated for integration into Lobaco. I’ve deliberately let them go by the wayside, as feature creep is the easiest way to kill a product before it even gets off the ground. Couple that with the fact that I only just recently had the penny drop on iPhone development and the less ambitious I make the first iteration of the product, the more likely I am to make it solid and usable rather than a total mess of half done features. Still, there were a couple on there that are wholly web client based, so feeling the entrepreneurial surge from two web start-up powerhouses I thought I should go ahead and knock a couple of them over.

Boy was I in for a surprise.

One feature which was easy and would make the UI slightly more complete was making the right hand side information section scale dynamically with the browser’s height. In essence this is so you can see more if you’ve got a larger screen, and it makes the UI look a bit better on smaller screens. Since Silverlight supports dynamic height scaling by simply not specifying a height, I thought all I’d need to do was remove the static height and I’d be done, leaving me to knock over another feature before bed. Instead, changing the property led to the list box scaling out to its full height and refusing to show a scroll bar, leaving me scratching my head as to what was going wrong.

Diving into the code I noticed that whatever I set the height to in the class file would determine the height of the list box. Thinking it would just be a matter of setting that to the available height to get the behaviour I wanted, I coded up a loop that set the height whenever the size of the browser window changed. This kind of worked but never scaled properly, despite my beautifully crafted logic statements. Something was definitely amiss, but it took me another 2 hours to track down what it was.

Essentially it was a clusterfuck of 3 different coding screw ups. The first was placing the custom class I had designed inside a list box, which in essence wrapped a list box in itself. The second was using that class in the first place, as it was not required and also duplicated a ton of styling logic thanks to the way Expression Blend messes with your code. Lastly, instead of adding items directly into the list box itself I was creating yet another list box, adding items into that, and then adding that entire list box into the main list box (which was wrapped in yet another list box). To get dynamically scaling height in that mess would’ve required setting the height in about 3 different locations consecutively, an expensive process for something that’s supported natively.
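To illustrate that last screw up (paraphrased from memory rather than lifted from the actual code base; Post and MainListBox are stand-in names), the old and new approaches looked roughly like this:

            // The old way: build a second ListBox, fill it, then add it as a
            // single item of the main ListBox (itself wrapped in yet another one)
            ListBox inner = new ListBox();
            foreach (Post post in posts)
            {
                inner.Items.Add(post);
            }
            MainListBox.Items.Add(inner);

            // The fix: items straight into the one ListBox, with no explicit
            // Height set so it scales with its container
            foreach (Post post in posts)
            {
                MainListBox.Items.Add(post);
            }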

The thing is, this component was one of the first things coded into Lobaco about 3 months ago, so this issue had been there almost from day dot. I’ve looked at that code dozens of times over the course of developing this application and not once did it twig that what I was doing was completely ass backwards. It’s been almost a month and a half since I did any serious work on the web client, and it seems that time away has given me enough perspective to see those obvious mistakes. I think all developers need time away from their projects in order to get their head out of the problem space and gain a clearer perspective on what they’re doing. Hell, I’d say those breaks I took from developing Lobaco were wholly responsible for the 3 code dumps I did and the current polished version that’s on the web today.

In the end the whole development process has been one of the most gratifying learning experiences I’ve ever had. It seems every time I think I’ve got things down pat I learn something new that makes me rethink my past decisions, tweaking things so they’re just that little bit better. Whilst I’m sure this code base is here to stay, it’s definitely evolving as time goes on, with each change building upon the last to provide a better experience for my future users. I won’t be making any progress on it for the next month whilst I travel the US, but I’ve got the feeling that in that time I’ll gain enough perspective to make some incredible changes to Lobaco, and hopefully I’ll come back recharged enough to hit development with renewed vigour.