Have you ever read a software patent? They’re laborious things to read, often starting out by describing their claims at length and then attempting to substantiate them all with even more colourful and esoteric language. This isn’t done out of some sick pleasure in torturing anyone who dares to read them, but because the harder a patent is to compare to prior art, the better its chances of getting through. Whilst a Dynamic Resolution Optimizer Algorithm might sound like something new and exciting, it’s quite likely just an image resizer: something trivial with tons of prior art, but which, if such a patent were granted, would give its owner plenty of opportunity to squeeze people for licensing fees.
Indeed this kind of behaviour, patenting anything and everything that can be done in software, is what has allowed the patent troll industry to flourish. These are companies that don’t produce anything, nor do they use their patents for their intended purpose (i.e. a time-limited monopoly on making use of said patent); all they do is seek licensing fees from companies that are infringing on their patent portfolio. The trouble is that with patent language being so deliberately obtuse and vague, it’s nigh on impossible for anyone creating software products not to infringe on one of them, especially when patents are granted for things the wider programming community would consider obvious and trivial. It’s for this reason that I, along with the vast majority of people involved in the creation of software, oppose patents like these, and it seems we may finally have the beginnings of support from governmental entities.
The New Zealand parliament just put the kibosh on software patents in a 117-4 vote. The language of the bill is a little strange, essentially declaring that a computer program doesn’t classify as an invention, although a computer application that implements a process (which itself can be patented) remains patentable. The legislation is also not retroactive, which means that any software patents granted in New Zealand prior to its passing will remain in effect until their expiry date. Whilst this isn’t the kind of clean sweep many of us would have hoped for, I think it’s probably the best outcome we could realistically expect, and the work done in New Zealand will hopefully act as a catalyst for similar legislation elsewhere.
Unfortunately the place where it’s least likely to happen is also the place where it’s needed most: the USA. The vast majority of software patents and their ensuing lawsuits originate in the USA, and unfortunately the only guaranteed way of avoiding infringement (not selling your software there) means cutting out one of the world’s largest markets. The only way I can see the situation changing is if the EU passed similar laws, though I haven’t heard of them attempting anything of the sort. The changes passed in New Zealand might go some way towards influencing them along the same lines, but I’m not holding my breath on that one.
So overall this is a good thing, although we’re still a long way from eradicating the evils of software patents. We always knew this would be a long fight, one that would likely take decades to see any real progress, but the decision in New Zealand shows that there’s a strong desire from the industry for change in this area and that people in power are starting to take notice.
My followers on Twitter will be aware that for the past few weeks I’ve been working with a couple of other guys on building a 3D printer, namely a RepRap Longboat Prusa. I’ve been interested in them for a long time, mostly because they tickle my sci-fi nerd side just right, but apart from endlessly fantasizing about them I hadn’t really pursued the idea further. One of my long-time gamer friends asked me late last year if I’d be interested in going halves on a kit. After I mentioned the idea to another friend he jumped on board as well, and the 3 of us waited eagerly for the kit to arrive.
In total we’ve spent about 48 man-hours over 3 days putting it together, getting the wiring done and then troubleshooting the software and interfaces. It’s been an eye-opening experience, one that challenged my electronics knowledge like it hasn’t been challenged in quite a few years, and the result is what you see below:
We decided not to attempt to print anything since by that point it was getting close to midnight and we didn’t want to keep the Make Hack Void space open any longer than we already had. From the dry run, though, it appeared to be functioning correctly (it’s printing a small cup in the video), albeit a little stiff at some points. We think that’s due to 2 things: first, the large gear on the extruder platform is warped slightly and sometimes hits the mounting hardware near it; second, we were running the steppers at a low voltage to begin with, so with a little more juice in them we’ll probably see them become more responsive. We’ve yet to print anything with it, but the next time we get together you can guarantee that’s pretty much all we’ll do after spending so long getting it running.
What this project opened my eyes to was that although there’s a torrent of information available, there’s no simple guide taking you from beginning to end. Primarily this is because the entire movement is completely open source, and the multitude of iterations available means there’s a near endless number of variations to choose from. Granted this is probably what a lot of the community revels in, but it would be nice if there was some clear direction for going from kit to print, rather than a somewhat organized wiki that has all the information but not in a clear and concise form.
The software for driving the machines is no better. We started off using the recommended host software, a Java app that for the most part seems to run well. At the moment though it appears to be bugged and is completely unable to interface with RepRap printers, something we only discovered after a couple of hours of testing. RepSnapper on the other hand worked brilliantly the first time around and was the software used to initiate the dry run in the video above. You’ll be hard pressed to find any mention of that particular software in the documentation wiki, however, which is really frustrating, especially when the recommended software doesn’t work as advertised.
I guess what I’m getting at is that whilst there’s a great community surrounding the whole RepRap movement, it still has a ways to go. Building your own RepRap, even from a kit, is not for the technically challenged and will require above entry-level knowledge of software, electronics and Google-fu. I won’t deny that overcoming all the challenges was part of the fun, but there were many roadblocks that could have been avoided with better documentation and some overarching direction.
All that being said, it’s still incredible that we were able to do this when not too long ago the idea of 3D printing was little more than a pipe dream. Hopefully as time goes on the RepRap wiki will mature and the process will be a little more pain-free for other users, something I’m going to contribute to with our build video (coming soon!).
My post last week about the trials and tribulations of sorting one’s media collection struck a chord with a lot of my friends. Like me they’d been doing this sort of thing for decades, and the fact that none of us had any kind of sense to our sorting systems (apart from the common thread of “just leave it where it lies”) came as something of a surprise. Taking the desk I’m sitting at right now as an example, it’s clear of everything bar computer equipment and the stuff I bring in with me every day. The fact that this kind of organization doesn’t extend to our file systems means either that we simply don’t care enough or that it’s just too bothersome to get things sorted. Whilst I can’t change the former, I decided I could do something about the latter.
With my quest last week proving fruitless, I set about developing a program that could sort media based on a couple of cues derived from the files themselves. Now for the most part media files have a few clues as to what they actually are. For the more organized among us the top-level folder will contain the series name, but since mine was all over the place I figured it couldn’t be trusted. Instead I figured that the file name would be semi-reliable: a cursory glance at my media folder showed that most of them were single strings delimited by only a few characters. Additionally the identifier for season and episode number is usually pretty standard (S01E01, 2x01, 1008, etc.), so pulling the season out of them would be relatively easy. What I was missing was something to verify that I was looking in the right place, and that’s where TheTVDB comes in.
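Sortilio itself is written in C#, but the identifier-matching idea is easy to sketch. Here’s a rough Python illustration (my own guess at the logic, not the tool’s actual code) that recognises the three formats mentioned above:

```python
import re

# The three identifier styles mentioned above, as one alternation.
# The "1008" style is assumed to mean season 10, episode 08.
EPISODE_PATTERN = re.compile(
    r"S(?P<s1>\d{1,2})E(?P<e1>\d{1,2})"   # S01E01 style
    r"|(?P<s2>\d{1,2})x(?P<e2>\d{1,2})"   # 2x01 style
    r"|(?P<s3>\d{1,2})(?P<e3>\d{2})\b",   # 1008 style
    re.IGNORECASE,
)

def parse_episode(token):
    """Return (season, episode) if the token looks like an identifier, else None."""
    m = EPISODE_PATTERN.fullmatch(token)
    if not m:
        return None
    # Work out which branch of the alternation actually matched.
    for s, e in (("s1", "e1"), ("s2", "e2"), ("s3", "e3")):
        if m.group(s) is not None:
            return int(m.group(s)), int(m.group(e))
```

A token like `S01E01`, `2x01` or `1008` comes back as a (season, episode) pair; anything else returns None, which is the signal to keep treating it as part of the series name.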
The TV Database is like IMDB for TV shows, except that it’s all community driven. Also unlike IMDB they have a really nice API that someone has wrapped up in a nice C# library I could import straight into my project. I use this as a kind of fuzzy matching filter for TV show names so that I can generate a folder with the correct name. At this point I could probably also rename the files (if I was so inclined), but in the interest of keeping the tool simple I opted not to for now. With that under my belt I started on the really hard stuff: figuring out how to sort the damn files.
Now I could have cracked open the source of some other renaming programs to see how they did it, but I figured out a half-decent process after pondering the idea for a short while. It’s a multi-stage process that makes a few assumptions but seems to work well on my test data. First I take the file name and split it up based on common delimiters used in media files. Then I build up a search string from those broken-up names, stopping when I hit a string that matches a season/episode identifier. I then add that to a list of search terms to query later, checking first to see if it’s already there. If it is, I add the file path to another list for that specific search term, so I know that all files under that search term belong to the same series. Finally I create the new file location string and present it all to the user, which ends up looking like this:
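The splitting-and-grouping stages above can be sketched in Python as well. The delimiter set and identifier formats here are my assumptions for illustration, not Sortilio’s actual values:

```python
import re
from collections import defaultdict

# Guessed delimiter set: dots, dashes, underscores and spaces.
DELIMITERS = re.compile(r"[.\-_ ]+")
# Guessed identifier formats: S01E01, 2x01, or a bare 3-4 digit number like 1008.
IDENTIFIER = re.compile(r"(?i)^(s\d{1,2}e\d{1,2}|\d{1,2}x\d{1,2}|\d{3,4})$")

def group_by_series(paths):
    """Map a search term (guessed series name) to the files that share it."""
    groups = defaultdict(list)
    for path in paths:
        # Strip any directory prefix and the extension.
        name = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
        series_tokens = []
        for token in DELIMITERS.split(name):
            if IDENTIFIER.match(token):   # stop at the season/episode marker
                break
            series_tokens.append(token)
        # Files sharing a search term are assumed to belong to one series.
        groups[" ".join(series_tokens).lower()].append(path)
    return dict(groups)
```

Running this over a couple of typical file names groups them under one search term per series, which is then what gets thrown at TheTVDB for verification.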
The view you see here is just a straight-up data table of the files that Sortilio has found and identified as media (currently anything with the extension .avi or .mkv) along with the confidence it has in its ability to sort said media. Green means that the search for the series name found only one match, so it’s a pretty good assumption that it got it right. Yellow means that the search for that particular title got multiple responses back from TheTVDB, so the confidence in the result is a little lower. Right now I just take the first response and use that for verification, which has served me well with the test data, but I can easily see how it could go wrong. Red means no match could be found at all (you can see what terms I was searching for in the debug log), and everything marked like that ends up in one giant “Unsorted” folder for manual processing. Once you hit the sort button it performs the move operations and, suffice to say, it works pretty darn well:
Of course it’s your standard hacked-together-over-the-weekend type deal, with a lot of not strictly necessary but really nice to have features left out. For starters there’s no way to tell it that a file belongs to a certain series (say, if something is misspelled), or to tell it to pick another series if it chooses the wrong one. Eventually I’m planning to make the items clickable so you can change the series, along with a nice dialog box to search for new ones should it not get it right. Until then you might want to run it on a small subset of your media at a time (another thing I can code in), as otherwise you might get files ending up in strange folders.
Also lacking is any kind of options page where you can specify things like other extensions, regular expressions for season/episode matching and a whole host of other preferences that are currently hard-coded. These things are nice to have but take forever to get right, so they’ll eventually make their way into another revision; for now you’re stuck with the way I think things should be done. Granted I believe the defaults will work for the majority of people out there, but I won’t blame you if you wait for the next release.
Finally, the code will eventually be open sourced once I get it to a point where I’m not so embarrassed by it. If you really want to know what I did in the ~400-odd lines that constitute this program then shoot me an email/tweet and I’ll send the source code to you. Realistically any half-decent programmer could come up with this in half the time I did, so I can’t imagine anyone will need it yet, unless you really need to save 3 hours 😛
So without further ado, Sortilio can be had here. Download it, unleash it on your media files and let me know how it works for you. Comments, questions, bugs and feature requests can be left here as a comment, an @ message on Twitter or you can email me on [email protected].
It was a beautifully warm night in late December down in Jervis Bay. My friends and I had rented a holiday house for the New Years weekend and we had spent most of the day drinking and soaking in the sun, our professional lives a million miles away. We had been enjoying all manner of activities, from annoying our significant others with 4 hour bouts of Magic: The Gathering to spending hours talking on whatever subject crossed our minds. Late into the evening, in a booze-fuelled conversation (only on my end, mind you), we got onto the subject of Agile development methodologies, and almost instantly I jumped on the negative bandwagon, something that drew the ire of one of my good mates.
You see, I’m a fan of more traditional development methodologies, or at least what I thought were traditional ideas. I began my software development career back in 2004, 3 years after the Agile Manifesto was brought into existence. Many of the programming methodologies I was taught back then centered around iterative and incremental methods, and using them in our projects seemed to fit the bill quite well. There was some talk of the newer agile methods, but most of it was written off as experimentation, and my professional experience as a software developer mirrored this.
My viewpoint in that boozy conversation was that Agile methodologies were a house of cards: beautiful and seemingly robust so long as no external factors act on them. This hinged heavily on the idea that some of the core Agile practices (scrum, pair programming and Agile’s inherent inability to document well) are detrimental in an environment with skilled programmers. The research seems to support this; however, it also shows significant benefits for average programmers, who you are more likely to encounter. I do consider myself a decently skilled programmer (how anyone can be called a programmer and still fail FizzBuzz is beyond me), which is probably why I saw Agile as more of a detriment to my ability to write good code.
After taking a step back however, I realised I was more agile than I previously thought.
There are two methodologies I use when programming that fall under the Agile umbrella. The first is Extreme Programming, which underpins the idea of “release early, release often”, something I consider necessary to produce good working code. Even though I don’t announce it on this blog, every time I get a new feature working I push it straight into production to see how it fares in the real world. I also carry a copy of the latest iPhone client with me for testing in the wild, to make sure the program will function as expected once it’s deployed. Whilst I don’t yet have any customers other than myself to get feedback from, it still keeps me in the mindset of making sure whatever I produce is workable and viable.
The second is Feature Driven Development, a methodology that I believe goes hand in hand with Extreme Programming. I usually have a notepad filled with feature ideas sitting in front of me, and when I get time to code I’ll pick one and set about implementing it. This keeps me from pursuing too many goals at once and makes sure they can all be completed within a certain time frame. Since I’m often coding on weekends I’ll usually aim to get at least one feature implemented per weekend, accelerating towards multiple features per weekend as the project approaches maturity.
Whilst I haven’t yet formally conceded to my friend that Agile has its merits (you can take this post as such, Eamon ;)), after doing my research into what actually constitutes an Agile methodology I was surprised to find how much I had in common with it. Since I’m only programming on my own at the moment many of the methods don’t apply, but I can’t honestly say that when I go about forming a team I won’t consider using the whole Agile Manifesto as a starting point. I still have my reservations about it for large-scale solutions, but for startups and small teams I’m beginning to think it could be quite valuable. Heck, it might even be the only viable approach for small-scale enterprises.
Man I hate being wrong sometimes.
Whilst I’m no stranger to the business world, I’m still a new player when it comes to developing usable products for a wide audience. My years of training as an engineer and a short stint as a project manager gave me a decent amount of insight into designing products and services for a customer who’s shovelling requirements at you, but when it comes to designing to requirements that are somewhat undefined, you can imagine I found myself initially dumbfounded. It’s one thing to have an idea in your head; bringing it kicking and screaming into the real world is another.
For the most part I began with an initial concept and fleshed it out as best I could. The original idea behind Geon was (in my head) called “What’s Going On?”, whereby you could plonk down an area on a map and send a question to everyone running the application in that area. The people there could then, if they so wanted, respond via their phone client with text, an image or video. The main idea was to get people communicating; secondary to that would be supplemental information from other sources. After socializing the idea a bit, people seemed to think it would be an interesting service (although most declined to comment seriously until they saw it in action), and the closest competitors looked to be throw-away applications that probably took their developers a couple of weeks to slap together. Things were looking good, so I started hacking away.
Behold the horror that was my first attempt, something I almost foolishly went ahead and tried to promote amongst my favourite tech sites. That first iteration was a horrible compilation of ASP.NET and various client libraries that I managed to scrounge from all over the Internet. For the most part it worked as intended, picking up information from various sources depending on your location. The problem was that it was ugly, unintuitive and relied rather heavily on my poor little web server to do all the heavy lifting. Additionally, after walking a blogger friend of mine through using it, he immediately suggested a couple of features that had never crossed my mind and that, upon consideration, would be absolutely essential in high information density areas. They were so good that even the latest incarnation of Geon incorporates his suggestions.
Looking back over all my experience designing solutions, I realised I had always been spoiled by having the problem handed to me on a silver platter. When you’re working for a client it’s pretty easy to figure out what they need when they’re telling you at every turn what they want. Sure, it might be a hassle to make sure they properly define their requirements, but at least you have a definitive source of information on what constitutes a successful outcome. When you’re developing something and you’re not quite sure who your client will be, the game changes, and you find yourself looking for answers to questions that might never have been asked before. Right now I find the majority of my answers in other people’s web services, hoping that emulating some of their characteristics will bring along some of their success.
At the core of all this is the software development philosophy of release early, release often. Whilst my product probably isn’t ready for prime time, the more I show it to the people who will (hopefully) end up as my users, the more insight I get into what I should and shouldn’t be doing with it. Even better was discussing it with some of my proper software engineering friends, who suggested different ways of doing things that not only simplified my code (to the order of hundreds of lines, thanks Brett ;)) but also opened up services that until now seemed baffling in the way they returned their data. I guess the lesson to take away is that the more you collaborate with others, the better your end product will be, which is hard for someone as protective of his creations as I am.
I know I harp on a lot about Geon on this blog (and I’m sure you guys are sick of hearing about it!), but it has been the source of many eye-opening moments, and it’s all too easy to get caught up in the excitement of sharing something I created with the world. I was never that creative (I can’t draw, I’m not a very sporty person, and my music creation skills have been in hiding since my debut song Chad Rock (that’s an anagram of the real title, FYI) earned me unwanted infamy in my group of friends), and apart from this blog I’ve never really had any other creative outlets. I guess I just want to let the wider world know how exciting it is to create something, even if I sound like a hyperactive 2 year old with a new toy 😉
Plus the more I talk about it the more likely I am to work on it, since I feel guilty for being all talk and no action.
I’m not what you’d call typical when it comes to my taste in music. Whilst I can easily identify the kinds of music I like (trance, dance and all that; you know, pops/clicks/whistles stuff), I don’t really listen to much of the top 40 or anything similar. If the radio is on in the car on the way to work it’s usually tuned to Triple J, mostly because of their enigmatic hosts and interesting news programs. Recently however, whilst over at a friend’s house, I was introduced to the current Top 40 on MTV’s music list, and something caught my ear.
One thing that I’m a sucker for in any kind of music is well-done vocoding. Making people sound like instruments triggers something in my head that just makes me like the music, no matter who is singing it. I think this is what attracted me to Daft Punk in the first place, as their album Discovery made heavy use of vocoding. Take a gander at the current ARIA Top 50: the song at the top is The Black Eyed Peas’ Boom Boom Pow. Here’s the film clip to give you an idea of what I’m getting at: http://www.youtube.com/watch?v=9F444CELomo
Another one that’s apparently been staying high in the charts is the Pussycat Dolls’ Jai Ho! (You Are My Destiny): http://www.youtube.com/watch?v=VrVlBrooxcM
Now these songs aren’t exactly vocoded, but they use something that runs along very similar lines, and indeed most artists who used such effects in the past were using vocoders. The product I’m referring to is AutoTune, made famous most recently by T-Pain, who has used it extensively throughout his songs. This, I believe, is what has led many of these chart-topping hits to start using it again, not only to give the singers perfect pitch but also to give them that vocoded “Cher” effect everyone is talking about.
It was an interesting bit of technology for me to come across. I initially heard about it through a few news articles mentioning its widespread use throughout the pop music industry. Since I had dabbled in music creation before, I knew that once I’d had a fiddle with the software (which I did, and it’s very interesting) I could easily identify who was using it and who wasn’t. After sitting through about 10 songs of the top 40 I was surprised at just how many of them were not only using it, but blatantly copying each other’s effects.
I guess this is indicative of what pop culture encapsulates. New and different doesn’t last long, as it goes one of two ways: either the “new” idea catches on and everyone else in the industry tries to emulate that success, or it isn’t popular and falls by the wayside, forgotten until someone tries it again. The use of AutoTune to produce pitch-perfect and augmented vocals used to be a small niche, typically relegated to the electronic and alternative music styles; thanks to the popularisation by T-Pain and his AutoTune cohorts, we’re seeing everyone latch onto the idea. However, I can’t help but think this is only a temporary phase, and that given another year or two there will be another popular sound or effect making the rounds.
For now though I don’t mind people abusing this piece of tech at all. Whilst the songs I’ve posted aren’t my usual kind of thing, they’re easy to listen to, and I enjoy the effect that AutoTune provides. Granted there are some instances where artists should be banned from AutoTune for life for trying (and horribly failing) to emulate the more experienced players, but you’re bound to get that with anything popular. It seems that for at least a little while longer I may be delving into the realms of popular music, to see whether they attempt to be innovative with this new bit of pop tech or just keep abusing it like they do everything else in their industry.
UPDATE: It has come to my attention that the owners of the videos posted don’t like embedding. I’ve added links to the videos so you may click through to them. Enjoy!
One of the biggest struggles the software industry faces is the not-so-underground pirate market. Whilst piracy used to be confined to certain countries and small, close social groups, over the years we’ve seen it become increasingly mainstream. Gone are the days when only the technically elite had the means and motivation to copy untold millions of dollars worth of software; now anyone with a quick Google search and a hunger for something free can get what they want.
So what can you do in a market where people will have your product despite not having paid for it? Simple: convert those people (who would probably not buy your software anyway, even if it were “unpiratable”) into your unruly mass of beta testers. How would you go about something like this? Well, Microsoft certainly has a novel way of recruiting beta testers:
The Release Candidate is now available to MSDN and TechNet subscribers, and will go on unlimited, general release on 5 May.
The software will not expire until 1 June 2010, giving testers more than a year’s free access to Windows 7.
“It’s available to as many people who see fit to use it, although we wouldn’t recommend it to just your average user,” John Curran, director of the Windows Client Group told PC Pro. “We’d very strongly encourage anyone on the beta to move to the Release Candidate.”
Being a beta tester of Windows 7 myself I can attest to the high build quality of the current release, and if the previous builds are any indication the RC will be a very polished operating system. This is the kind of thing that could lure those devilish pirate users away from their current installs of Windows, which can’t patch or download Microsoft’s value-add software, onto a new system where they’re treated essentially like fully paid Microsoft customers. Not to mention some of the perks from other companies, such as free antivirus: yet another perk on top of something that’s completely free.
Another bit of evidence lending credence to this theory is that even months after Microsoft pulled the keys from their Windows 7 registration site, the torrent for the latest build remains up for all to download and play with. Whilst you run the risk of downloading a pre-loaded trojan, Microsoft was kind enough to provide a SHA-1 hash of the builds, allowing you to verify that your downloaded file is genuine. It also takes a bit of load away from Microsoft, who should have considered releasing an official torrent in the first place.
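Verifying a download against a published SHA-1 hash is straightforward; here’s a minimal Python sketch (the ISO file name in the usage comment is a placeholder, not Microsoft’s actual file name):

```python
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MB chunks so a multi-gigabyte ISO never has to fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (placeholder file name): compare against the hash Microsoft published.
# if sha1_of("windows7_rc.iso") == published_hash:
#     print("Download is genuine")
```

If the hex digest matches the published one, you can be confident the torrented file is bit-for-bit identical to what Microsoft released, trojan fears put to rest.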
So what do they have to lose by switching across? For the most part they might have issues with legacy bits of software and possibly hardware incompatibilities. When I first installed Windows 7 most of my hardware already had Windows 7 drivers available, and where those failed the Vista drivers worked (albeit with a few tweaks). Since these users are now technically customers of Microsoft they can ask for support with their problems, something that would previously have involved trawling through endless web searches hoping someone else had the same issue.
Doing this kind of long beta is, however, a double-edged sword. As many software developers have found, providing your software to the general public ahead of time gives the hackers and crackers a head start on your copy protection mechanisms¹. By the time Windows 7 hits the stores the activation scheme will be well known, and Microsoft will be a step behind in the ever-raging arms race with the pirates. It also takes away a lot of the hype around the product, since everyone who would buy it will probably already have it installed.
For Microsoft this is making the best of a bad situation, and overall it’s a good move for them. Whilst the rate won’t be high, I’m sure there are some people running a previous (pirated) version of Windows who will consider forking over some cash for the new version once they’ve played with it for a year. Additionally, the corporate sector will have a long time to prepare for Windows 7, easing the transition pain somewhat.
I know I’ll be running it for the coming year 🙂
¹ Whilst I can’t find a good link on one of the techniques I used to hear of, I’ll attempt to explain it here. Many game development companies would provide a demo or trial version a few weeks before official release in order to generate a bit of hype. Usually this would involve a lot of the production code, and most of the time it wouldn’t contain the DRM or genuine copy verification mechanisms. Many would-be hackers would then use the files in the demo to create cracks for the retail versions, sometimes by simply copying the main executable from the trial over the top of the retail version.