Posts Tagged 'change'

Chang'e 3 Launches: China's First Lunar Rover.

For a country that was barred from ever working with the leader in space technology, the progress China has made in the last decade has been incredibly impressive. They've quickly gone from humble beginnings in 2003, when their first taikonaut made it into orbit, to a fully fledged space station in 2013, showing that they have the technical expertise required to consistently pull off envelope-pushing activities. Of course whilst the most interesting aspect of any space program is the manned activities (who doesn't love seeing people in space!) there's always the quiet sibling in the robotics department, attempting missions that few humans will ever be able to. I must admit that until today I was also ignorant of China's robotic efforts in space but suffice to say they're just as impressive as their manned accomplishments.

China's Chang'e program (named for the Chinese goddess of the Moon) is a series of lunar spacecraft tasked with creating highly detailed maps and models of the Moon's surface, with the intent that that data will be used for future manned missions. Chang'e 1 was launched back in 2007 and remained in lunar orbit for 2 years. It created the most accurate and detailed surface map of the Moon to date and, once it was done, plummeted into the surface it had just mapped to send up a spray of regolith that could be studied from here on Earth. Its successor, Chang'e 2, was launched in 2010 and had similar capabilities (albeit with higher resolution instruments and a lower orbit) but instead of being plunged into the Moon at the end of its mission it was sent out to do a flyby of asteroid 4179 Toutatis. Its current trajectory will eventually see it hit interstellar space, however it's likely it'll run out of fuel long before that happens; the purpose of the extended mission is to validate China's deep space tracking network.

Chang'e 3, launched just yesterday, will be the first craft China has ever launched to land on the Moon's surface. For a first attempt it's a fairly ambitious little project, consisting of both a lander and a rover, whereas similar missions usually go for a lander first before attempting a rover. The lander is an interesting piece of equipment as it contains an RTG as a power source as well as an ultraviolet telescope, making it the first Moon-based observatory. Whilst it won't be anything like the Hubble or similar space telescopes it will still be able to do some solid science thanks to its location, and the telescope makes the lander's useful life much longer than it typically would be.

The rover is just as interesting, being roughly equivalent to the Mars Exploration Rovers (Spirit and Opportunity) in terms of size and weight. It can provide real time video back to Earth and has sample analysis tools on board. The most important instrument it carries, however, is a radar on its base allowing it to probe the lunar surface in a level of detail that hasn't been achieved before, giving us insights into the make up of the regolith and the crust beneath it. It will be interesting to see what its longevity will be like, as its power source is its solar panels (unlike its parent lander) and the lack of atmosphere should mean they'll remain clean for the foreseeable future.

As of right now there are 2 more missions planned in the Chang'e line, with Chang'e 4 having similar capabilities to Chang'e 3 and Chang'e 5 being a lunar sample return mission. After that it's expected that China will start to eye off manned lunar missions, starting with the traditional flag planting operations and then quickly escalating to a fully fledged moon base not long after. It's quite possible that they'll accomplish that within the next 2 decades as well, as their past accomplishments show how quickly they can churn out envelope-pushing missions, something other space-faring nations have been lacking as of late.

Whilst it might not reach the heights we saw during the Cold War there's definitely another space race starting to heat up, although this time it's between the private space industry and China. Whilst it's likely that China will win the race to the Moon and possibly Mars I can't help but feel that the private industry isn't too far behind. Heck, combine Bigelow Aerospace and SpaceX and you've already got the majority of the Chinese manned program right there! Still this doesn't detract from the accomplishments the Chinese have made and I only hope that the USA eventually changes its stance on co-operating with them.

New Zealand Bans Software Patents.

Have you ever read a software patent? They're laborious things to read, often starting out by describing their claims at length and then attempting to substantiate them all with even more colourful and esoteric language. They do this not out of some sick pleasure taken in torturing people who dare to read them but because the harder a patent is to compare to prior art the better chance it has of getting through. Whilst a Dynamic Resolution Optimizer Algorithm might sound like something new and exciting it's quite likely just an image resizer, something that is trivial and has tons of prior art but, if such a patent were granted, would give its owner plenty of opportunity to squeeze people for licensing fees.

Image Credit: Mishi from OpenSource.com

Indeed this kind of behaviour, patenting anything and everything that can be done in software, is what has allowed the patent troll industry to flourish. These are companies that don't produce anything, nor do they use their patents for their intended purpose (I.E. a time limited monopoly to make use of said patent); all they do is seek licensing fees from companies who are infringing on their patent portfolio. The trouble is that with patent language being so deliberately obtuse and vague it's nigh on impossible for anyone creating software products not to infringe on one of them, especially when they're granted for things the wider programming community would consider obvious and trivial. It's for this reason that I, and the vast majority of people involved in the creation of software, oppose patents like these, and it seems we may finally have the beginnings of support from governmental entities.

The New Zealand parliament just put the kibosh on software patents in a 117-4 vote. The language of the bill is a little strange, essentially declaring that a computer program as such doesn't qualify as an invention, however a computer application that implements a process (which itself can be patented) remains patentable. The legislation is also not retroactive, which means that any software patents granted in New Zealand prior to its passing will remain in effect until their expiry dates. Whilst this isn't the kind of clean sweep that many of us would have hoped for I think it's probably the best outcome we could realistically expect, and the work done in New Zealand will hopefully function as a catalyst for similar legislation to be passed elsewhere.

Unfortunately the place where it's least likely to happen is also the place where it's needed the most: the USA. The vast majority of software patents and their ensuing lawsuits take place in the USA and unfortunately the only guaranteed way of avoiding infringement (not selling your software there) means cutting out one of the world's largest markets. The only way I can see the situation changing there is if the EU passed similar laws, however I haven't heard of them attempting anything of the sort. The changes passed in New Zealand might go some way towards influencing them along the same lines, but I'm not holding my breath on that one.

So overall this is a good thing, however we're still a long way off from eradicating the evils of software patents. We always knew this would be a long fight, one that would likely take decades to see any real progress, but the decision in New Zealand shows that there's a strong desire from the industry for change in this area and that people in power are starting to take notice.

New Server, New Theme, New Beginnings.

On the surface this blog hasn't changed that much. The right hand column has shifted around a bit as I added and subtracted various bits of social integration but for the most part the rest of the site has remained largely static. Primarily this was due to laziness on my part as, whilst I always wanted to revamp it, I could just never find the motivation, nor the right design, to spur me on. However after a long night spent perusing various WordPress theme sites I eventually came across one I liked. It was a paid one and, although I'm not one to shy away from paying people for their work, that's always something of a barrier. I kept the page open in Chrome and told myself that when it came time to move servers, that would be the time I'd make the switch.

And yesterday I did.

My previous provider, BurstNET, whilst quite amazing at the start, slowly started to go downhill of late. Since I'd been having a lot of issues, mostly of my own doing, I had enlisted Pingdom to track my uptime, and the number of alerts I got started to trend upwards. For the most part it didn't affect me too much as most of the outages happened outside my prime time, however it's never fun to wake up to an inbox full of alerts so I decided it was time to shift to a new provider. I'd had my eye on Digital Ocean for a while as they provide SSD-backed VPSs, something I had investigated last year but was unable to find at a reasonable price. Plus their plans are extraordinarily cheap for what you get, with this site coming to you via their $20/month plan. Set up was a breeze too, even though it seems every provider has their own set of quirks built into their Ubuntu images.

The new theme is BlogTime from ThemeForest and I chose it precisely because it's the only one I could find that emulates the style you get when you log in to WordPress.com (with those big featured images at the top and a nice flat layout). The widgets the developer provides with the theme unfortunately don't seem to work, at least not in the way that's advertised, so I had to spend some time wrestling with the Facebook and Twitter widget APIs to get them looking semi-decent in the sidebar. Thankfully the "dark" theme on both sites seems to match the background here quite well, otherwise I would've had to do a whole bunch of custom CSS work that I just wasn't in the mood for last night. Probably the coolest thing about this theme is that it automatically resizes itself depending on what kind of device you're using, so this blog should look pretty much the same no matter how you're viewing it.

I also took the opportunity to try setting up caching again and, whilst it appeared to work great last night, when I attempted to load my site this morning I was greeted with an empty response. Logging into the WordPress dashboard directly seemed to solve this but I'm not quite sure why W3 Total Cache made my site serve nothing for the better part of 5 hours. For the moment I've disabled it, as the site appears to be running quite fine without it, but I'll probably attempt to get a caching plugin running again in the future as when they're working they really are quite good.
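For what it's worth, that "empty response" failure mode is easy to check for yourself. Below is a minimal sketch, in Python, of the kind of probe a service like Pingdom runs: it treats a 200 response with a near-empty body as a failure, since that's exactly what a misbehaving page cache can serve up. The URL and size threshold are placeholders for illustration, not anything from my actual setup.

```python
import urllib.request

SITE_URL = "https://example.com/"  # stand-in URL; substitute your own site
MIN_BODY_BYTES = 1024  # an "empty response" comes back well under this

def site_is_healthy(url: str) -> bool:
    """Fetch the front page and flag anything that isn't a 200 with a real body."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            body = response.read()
            # A broken page cache can return a 200 with no body at all,
            # so checking the status code alone wouldn't catch it.
            return response.status == 200 and len(body) >= MIN_BODY_BYTES
    except OSError:  # timeout, connection refused, DNS failure, etc.
        return False

if __name__ == "__main__":
    print("OK" if site_is_healthy(SITE_URL) else "DOWN or serving empty pages")
```

Run that from cron every few minutes and you've got a poor man's Pingdom; the real service just does the same check from many locations and emails you the results.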

Does this change of face mean there's going to be a radical change in what this site is about? I'm not intending one; whilst my traffic has been flagging of late (and why that is I couldn't tell you) this was more a revamp that was long overdue. I'd changed servers nearly once a year, however I had not once changed the theme (well, unless you count the Ponies incident) and it was starting to get a little stale, especially considering it seemed to be the theme of choice for a multitude of other tech blogs I visited. So really all that's changed is the look and the location this blog is coming to you from; everything else is pretty much the same, for better or for worse.

Changing the User Paradigm with Windows 8.

As any IT admin will tell you, users aren't really the best at coping with change. It's understandable though; for many people the PC they use in their everyday work is simply a tool with which to accomplish their required tasks, nothing more. Fundamentally changing the way that tool works means they also have to change the way they work, and often this is met with staunch resistance. As such it's rather difficult for new paradigms to find their feet, often requiring at least one failed or mediocre product to be released in order for the initial groundwork to be done, so that the next generation can enjoy the success its predecessor was doomed never to achieve.

We don't have to look that far into the past to see an example of this happening. Windows Vista was something of a commercial failure, which can be traced to 2 very distinct issues. The first, and arguably the most important, was the lack of driver support from vendors, leaving many users with hardware that simply couldn't run Vista even if it was technically capable of doing so. The second was the major shift in the user experience, with the start menu being completely redesigned and many other parts of the operating system being revamped. These 2 issues were the one-two knockout punch that put Vista in the graveyard and gave Windows 7 one hell of an uphill battle.

Windows 8, whilst not suffering from the driver disaster that plagued Vista, revamps the user experience yet again. This time however it's more than just a splash of eye candy and a rearranging of menu items; it's a full-on shift in how Windows PCs will be used. Chief amongst these changes is the Metro UI which, after being field tested on Windows Phone 7 handsets, has found its way onto the desktop and every Windows powered device. Microsoft has made it clear that this is the way they'll be doing everything in the future and that the desktop as we know it will soon fade away in favour of a Metro interface.

This has drawn the ire of IT professionals and it's easy to see why. Metro is at its heart designed for users, taking cues from the success Apple has achieved with its iOS range of products. However whilst Apple is happy to slowly transform OS X into another branch of their iOS line, Microsoft has taken the opposite approach, unifying all their ecosystems under the one banner of Metro (or more aptly WinRT). This is a bold move from Microsoft, essentially betting that the near future of PC usage won't be on the desktop, the place where the company has established itself as the dominant player in the market.

And for what it's worth they're making the right decision. Apple's success proves that users are quite willing, and able, to adapt to new systems if the interfaces to them are intuitive, minimalistic and user focused. Microsoft has noticed this and is looking to take advantage of it by providing a unified platform across all devices. Apple is already close to providing such an experience but Microsoft has the desktop dominance, something that will help them drive adoption of their other platforms. However whilst the users might be ready, willing and able to make the switch I don't think Windows 8 will be the one to do it. It's far more likely to be Windows 9.

The reasoning behind this is simple: the world is only just coming to grips with Windows 7 after being dragged kicking and screaming away from Windows XP. Most enterprises are only just starting to roll out the new operating system now, and those who have already done so don't have deployments that are over a year old. Switching over to Windows 8 is therefore going to happen a long way down the line, long enough that many organizations will simply skip it in favour of the next iteration. If Microsoft sticks to their current 3 year release schedule then organizations looking to upgrade after Windows 7 won't be looking at Windows 8; it's far more likely to be Windows 9.

I'm sure Microsoft has anticipated this and has decided to play the long game instead of delaying fundamental change that could put them seriously behind their competition. It's a radical new strategy, one that could pay them some serious dividends should everything turn out the way they hope. The next couple of years are going to be an interesting time as the market comes to grips with the new Metro face of the iconic Windows desktop, something which resisted change for decades prior.

VMware Capitulates, Shocking Critics (Including Me).

It's a sad truth that once a company reaches a certain level of success they tend to stop listening to their users and customers, since by that point they have enough validation to continue down whatever path suits them. It's a double-edged sword for the company: whilst they now have much more freedom to experiment, since they don't have to fight for every customer, they also have enough rope to hang themselves should they be too ambitious. This happens more in traditional businesses than in, say, Web 2.0 companies, since the latter's bread and butter is their users and the community that surrounds them, leaving a lot less wiggle room when it comes to going against the grain of their wishes.

I recently blogged about VMware's upcoming release of vSphere 5 which, whilst technologically awesome, did have the rather unfortunate aspect of screwing over the small to medium sized enterprises that had invested heavily in the platform. At the time I didn't believe that VMware would change their mind on the issue, mostly because their largest customers would likely be unaffected by it (especially the cloud providers), but just under three weeks later VMware has announced that they are changing the licensing model, and boy is it generous:

We are a company built on customer goodwill and we take customer feedback to heart.  Our primary objective is to do right by our customers, and we are announcing three changes to the vSphere 5 licensing model that address the three most recurring areas of customer feedback:

  • We’ve increased vRAM entitlements for all vSphere editions, including the doubling of the entitlements for vSphere Enterprise and Enterprise Plus.

  • We’ve capped the amount of vRAM we count in any given VM, so that no VM, not even the “monster” 1TB vRAM VM, would cost more than one vSphere Enterprise Plus license.

  • We adjusted our model to be much more flexible around transient workloads, and short-term spikes that are typical in test & dev environments for example.

The first 2 points are the ones that will matter to most people, with the bottom end licenses getting a 33% boost to 32GB of vRAM allocation and every other licensing level getting its allocation doubled. Now for the lower end that doesn't mean a whole bunch, but a standard configuration just gained another 16GB of vRAM, which is nothing to sneeze at. At the higher end these massive increases really start to pile on, especially for a typical configuration with 4 physical CPUs, which now sports a healthy 384GB vRAM allocation with default licensing. The additional caveat that no virtual machine is counted at more than 96GB of vRAM means that licensing costs won't get out of hand for mega VMs, but in all honesty if you're running virtual machines that large I'd have to question your use of virtualization in the first place. Additionally the change from a monthly average to a 12 month average for the licensing check goes some way to alleviating the pain that some users would have felt, even though they could've worked around it by asking VMware nicely for one of those unlimited evaluation licenses.

What these changes do is make vSphere 5 a lot more feasible for users who have already invested heavily in VMware's platform. Whilst it's nowhere near the current 2 processors + gobs of RAM deal that many have been used to, it does make the smaller end of the scale much more palatable, even if the cheapest option will leave you with a meagre 64GB of vRAM to allocate. That's still enough for many environments to get decent consolidation ratios of say 8 to 1 with 8GB VMs, even if that's slightly below the desired industry average of 10 to 1 (see the sketch below for the arithmetic). The higher end, whilst a lot more feasible for a small number of ridiculously large VMs, still suffers somewhat as higher end servers will need additional licenses to fully utilize their capacity. Of course not many places will need 4 processor, 512GB beasts in their environments but it's still going to be a factor counting against VMware.
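To make that arithmetic concrete, here's a quick back-of-the-envelope sketch using only the figures quoted above: 32GB per entry-level licence, 96GB per Enterprise Plus licence (as implied by the 4 CPU / 384GB example), and the 96GB per-VM counting cap. The edition names are simplifications of VMware's full SKU list, so treat this as illustration rather than an official price list.

```python
# Illustrative vRAM pooling maths for the revised vSphere 5 model.
# Entitlement figures are taken from this post, not VMware's SKU list.
ENTITLEMENT_GB = {"entry": 32, "enterprise_plus": 96}  # vRAM per CPU licence
PER_VM_CAP_GB = 96  # no single VM counts more than this against the pool

def pooled_vram(edition: str, cpus: int) -> int:
    """Total vRAM entitlement for a host licensed per physical CPU."""
    return ENTITLEMENT_GB[edition] * cpus

def counted_vram(vm_gb: int) -> int:
    """vRAM a single VM counts against the pool once the cap is applied."""
    return min(vm_gb, PER_VM_CAP_GB)

print(pooled_vram("enterprise_plus", 4))  # 384 -> the 4 CPU box above
print(pooled_vram("entry", 2))            # 64  -> the cheapest 2 CPU option
print(counted_vram(1024))                 # 96  -> a 1TB "monster" VM
print(pooled_vram("entry", 2) // 8)       # 8   -> 8GB VMs at an 8 to 1 ratio
```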

The licensing changes from VMware are very welcome and will go a long way for people like me who are trying to sell vSphere 5 to their higher ups. Whilst licensing was never an issue for me I do know that it was a big factor for many, and these improvements will allow them to stay on the VMware platform without having to struggle with licensing concerns. I have to give some major kudos to VMware for listening to their community and making changes that will ultimately benefit both them and their customers, as this kind of interaction is becoming increasingly rare as time goes on.

I Think I’m A Challenge Junkie.

My last two years have seen me dabble in a whole swath of things I never thought I'd dip my toes into. The first was web development, arguably inspired by this blog and the trials and tribulations that went into making it what it is today. Having been out of the development game for quite a long time before that (3 years or so) I had forgotten the thrill of solving some complex problem or finding an elegant solution to replace my overly complicated one. This then led me to try a cascade of different technologies, platforms and frameworks as ideas started to percolate through my head and success stories of overseas start-ups left me lusting for a better life that I could create for myself.

For each of these new technologies I pursued I always had, at least in my mind, a good reason for doing so. Web development was the first step in the door and a step towards modernizing the skills I had let decay for too long. Even though my first foray into this was with ASP.NET, widely regarded as the stepping stone to the web for Windows desktop devs like myself, I still struggled with many of the web concepts. Enter then Silverlight, a framework which is arguably more capable than ASP.NET but has the horrible dependency of relying on an external plugin. Still it was enough to get me past the hurdle of giving up before I had started and I spent much of the next year getting very familiar with it.

Of course the time then came when I believed I needed to take a stab at the mobile world and promptly got myself involved in all things Apple and iOS. For someone who'd never really dared venture outside the comfortable Microsoft world it was a daunting experience, especially when my usual approach of "attempt to do X; if you can't, Google until you can" had me hitting multiple brick walls daily. Eventually however I broke through to the other side and I feel it taught me as much as my transition from desktop to web did. Not long after hitting my stride, however, I found myself deep in yet another challenge.

Maybe it was the year+ I had spent on Lobaco without launching anything, or maybe it was the (should have been highly expected) Y-Combinator rejection, but I found myself looking for ideas for another project that could free me from the shackles of my day job. Part of me also blamed the frameworks I had been using up until that point, cursing them for making it so hard to build a well rounded product (neglecting the fact that I only worked on it on weekends). So of course I tried all sorts of other things like Ruby on Rails and PHP, and even flirted with the idea of trying one of those newfangled esoteric frameworks like Node.js. In the end I opted for ASP.NET MVC, which was familiar enough for me to understand without too much effort and modern enough that it didn't feel like I'd need to require IE6 as the browser.

You're probably starting to notice a pattern here. I have a lot of ideas, many of which I've actually put some serious effort into, but there always comes a point when I dump both the idea and the technology it rests on for something newer and sexier. It dawned on me recently that the ideas and technology are just mediums through which I pursue a challenge, and once I've conquered them (to a certain point) they're no longer the challenge I idolized, sending me off to newer pastures. You could write much of this off to coincidence (or other external factors) except that I've done it yet again with the last project I mentioned I'm working on. I'm still dedicated to it (since I'm not the only one working on it) but I've had yet another sexy idea that's already taken me down the fresh challenge path, and it's oh so tempting to drop everything for it.

I managed to keep my inner junkie at bay for a good year while working on Lobaco so it might just be a phase I'm going through, but the trend is definitely a worrying one. I'd hate to think that my interest in something only lasts as long as it takes to master it (well, get competent with it), as that would be a killer for any potential project. I don't think I'm alone in this boat either; us geeks tend to get caught up in the latest technology and want to apply it wherever we can. I guess I'll just have to keep my blinkers on and stick with my current ideas for a while before I let myself get distracted by new and shiny things again. Hopefully that will give me enough momentum to overcome my inner challenge junkie.

BitCoin’s Saviour: The Bursting Bubble.

My opinion hasn’t changed much in the month since I wrote my first post on how I think BitCoin is a pyramid scheme, ultimately destined to unravel unceremoniously when all the speculative investors decide to pull the plug and cash out of the BitCoin market. Still the discussion that that post spawned was quite enlightening, forcing me to clarify many points both in my own head and here on my blog. Since then there’s been a deluge of other blogs and press chiming in with similar opinions about BitCoin and how its intended purpose is far from its reality. There’s been enough noise about BitCoin’s issues that last week saw the first major dip in the exchange rate, and it hasn’t been smooth sailing since.

The image above is the historic trading price for BitCoins to USD on the biggest BitCoin exchange, Mt.Gox. The BitCoin "Black Friday" can be seen as the first dip following the massive peak at around $30. Since that day BitCoin has been shedding value constantly, with the latest bid offers hovering around the $18 mark. This is not the kind of volatility you see in something you'd class as a currency, where single percentage changes are cause for concern and usually government intervention. In the space of a week BitCoin has shed almost half of its peak value, which in any sane market would have seen trading suspended to prevent a fire sale of the asset. The market isn't showing any signs of recovering either, as the market depth report from Mt.Gox shows:

There's a very large discrepancy between most sellers' idea of how much BitCoin is worth and what the market is willing to pay for it. The vast majority of sellers are looking to cash out in the mid-twenties range when the highest buy offer doesn't even break the $20 mark. Any rational actor in this sort of market would be looking to get out before the market wipes out their value completely and, for what it's worth, I believe the main speculators have probably already withdrawn from the market, which is what triggered the initial dip in price. Liquidity in the BitCoin market is fast drying up and that will only serve to drive the price back to (or even below) its initial stable equilibrium.
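If the depth chart reasoning seems abstract, a toy order book makes it obvious. The numbers below are made up for illustration (the real figures came from Mt.Gox's depth report); the point is that when the best ask sits far above the best bid, trades stop happening and liquidity evaporates.

```python
# Hypothetical order book shaped like the Mt.Gox depth report described above.
bids = [(18.10, 120.0), (17.50, 300.0), (16.00, 450.0)]  # (price USD, volume BTC)
asks = [(24.00, 200.0), (25.50, 600.0), (26.00, 900.0)]

best_bid = max(price for price, _ in bids)  # the most any buyer will pay
best_ask = min(price for price, _ in asks)  # the least any seller will accept
spread = best_ask - best_bid

print(f"best bid ${best_bid:.2f}, best ask ${best_ask:.2f}, "
      f"spread ${spread:.2f} ({spread / best_bid:.0%} of the bid)")
# With sellers anchored in the mid-twenties and buyers refusing to break $20,
# a spread of over 30% means almost nothing changes hands.
```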

On the surface this would appear to be the beginning of the end for BitCoin, since confidence in the currency is rapidly disappearing along with all the accumulated wealth being lost to the diving market. However whilst many who were hoping to make their riches with a nascent currency might find themselves short changed, the diving price of BitCoins means that those who were working against the currency's intentions, I.E. those who were using it as a speculative investment vehicle, are more likely to leave the market alone now that it's been pumped and dumped. Once the price retreats back to more stable levels BitCoin could then start functioning as it is supposed to: as a vehicle for wealth that has no central authority regulating it.

It's not going to be an easy road for BitCoin and its adopters though, as confidence in the currency has been dashed, with even some of its earliest supporters withdrawing from it. Mining will no longer be a profit driven enterprise, instead run by those who support the idea and large companies like Mt.Gox who run exchanges. Once the notion that BitCoin's value will be ever increasing has dissipated we may finally reach a point where BitCoins are primarily used as a vehicle for value transfer and not speculative investment. It will probably be another month or two before we reach a new stable equilibrium in the BitCoin market, but after that I might finally stop harping on about it being an elaborate (though probably unintentional) scheme.

This still doesn't undo the concentration of wealth amongst early adopters in the BitCoin ecosystem, but once their incentive to hoard currency has vanished the impact of their vast BitCoin stashes will mean a whole lot less than it did during the speculative price explosion. This should encourage them to put those BitCoins into circulation, adding much needed liquidity to the market and hopefully restoring some faith in the system. Time will tell if this works out, however, as with market volumes so low on the BitCoin exchanges price manipulation is bound to happen from time to time, something that realistically can only be solved by wider adoption. I'm still not convinced that BitCoin is a safe place for any of my wealth, but once it's recovered from this rapidly bursting bubble I may revisit it, should the want arise.

This Isn’t The Microsoft I Know…

You'd be forgiven for thinking that Microsoft was never a major player in the smartphone space. Most people had never really heard of or seen a smartphone until Apple released the iPhone, and the market really didn't heat up until a couple of years after that. However if you were to go all the way back to 2004 you'd find they were extremely well positioned, capturing 23% of the total market share, with many analysts saying that they would be the leader in smartphone software by the end of the decade. Today however they're the next to last option for anyone looking for a smartphone, thanks wholly to their inertia in responding to the incoming threats from Apple and Google.

Microsoft wasn't oblivious to this fact but their response took too long to come to market to save any of the market share they had previously gained. Their new product, Windows Phone 7, is quite good if you consider it on the same level as Android 1.0 and the first iPhone. Strangely enough it also suffers from some of the problems that plagued the early revisions of its competitors' products (like the lack of copy and paste) but to Microsoft's credit their PR and response time on the issue has been an order of magnitude better. They might have come too late into the game to make a significant grab with their first new offering but, as history has shown us, Microsoft can make a successful business even if it takes them half a decade of losses to catch up to the competition (read: the Xbox).

More recently though I've noticed a shift in the way Microsoft is operating in the mobile space. Traditionally, whilst they've been keen to push adoption of their platform through almost any means necessary, they've been quick to stand against any unsanctioned uses of their products. You can see this mentality in action in their Xbox department, which has fervently fought any and all means of running homebrew applications on their consoles. Granted the vast majority of users modding their consoles do so for piracy reasons, so their stance is understandable, but recent developments are starting to show that they might not be averse to users running homebrew applications on their devices.

ChevronWP7 was the first (and, as far as I know, only) application to allow users to jailbreak their WP7 devices in order to load arbitrary applications onto them. Microsoft wasn't entirely happy with its release but didn't do anything drastic to stop its development. They did announce that the next update to WP7 would see it disabled, much like Apple does with their iOS updates, but then they did something that the others haven't ever done before: they met with the ChevronWP7 team:

After two full days of meetings with various members of the Windows Phone 7 team, we couldn’t wait to share with everyone some results from these discussions.

To address our goals of homebrew support on Windows Phone 7, we discussed why we think it’s important, the groups of people it affects, its direct and indirect benefits and how to manage any risks.

With that in mind, we will work with Microsoft towards long-term solutions that support mutual goals of broadening access to the platform while protecting intellectual property and ensuring platform security.

Wait, what? In days gone by it wouldn't have been out of place for Microsoft to send out a cease and desist letter before unleashing a horde of lawyers to destroy such a project in its infancy. Inviting the developers to your headquarters, showing them the roadmap for future technologies and then allying with them is downright shocking, but it shows how Microsoft has come to recognise the power of the communities that form around the platforms they develop. In all respects the users of ChevronWP7 probably make up a minority of WP7 users, but they're definitely amongst the most vocal and are potentially future revenue generators should they end up distributing their homebrew to the real world. Heck, they're even reaching out to avid device hacker Geohot since he mentioned his interest in the WP7 platform, offering him a free phone to get him started.

The last few years haven't been kind to Microsoft in the mobile space and it appears they're finally ready to take their medicine so that they might have a shot at recapturing some of their former glory. They've got an extremely long and hard fight ahead of them should they want to take back any significant market share from Apple or Google, but the last couple of months have shown that they're willing to work with their users and enthusiasts to deliver products that they, and hopefully the world at large, will want to have. My next phone is shaping up to be a WP7 device simply because the offering is just that good (and development will be 1000x easier), and should Microsoft continue their recent stint of good behaviour I can only see it getting better and better.

The Changing Face of the Modern Geek.

Just as the IT industry continues to reinvent itself every 10 years, so too, it seems, do the people in that industry. Whilst the term IT is relatively new compared to many other trades it has still managed to capture a stereotype. What is interesting however is how the image of the typical IT geek has progressed over the past few decades, from a lab worker to something completely and utterly different.

Image courtesy of the Computer History Museum.

In the early days of large computational clusters many technicians looked like this: well dressed, with an almost business-like demeanour. It was part of the culture back then, as many of these types of systems were built for either large universities or corporations, and with big dollars being shelled out for them (this was the IBM 7030 Stretch, which would cost around $100 million in today's dollars) it was kind of expected. I think that's why the next generation of geeks set the trend for the next couple of decades.

Image courtesy of Microsoft.

A young Bill Gates shows what would become the typical image conjured up in everyone's head when the word geek or nerd was uttered, for a long time to come. The young, tall and skinny people who buried themselves in computers were the faces of our IT community for years, and I think this is when those thick rimmed glasses became synonymous with our kind. It was probably around this time that geeks became associated with a tilt towards social awkwardness, something that many people still joke about today. What's really interesting though is the next few steps I've seen in the changing geek image.

Image courtesy of JustTheLists.

Jerry Yang and David Filo were among the first of a generation of what most people call Internet pioneers. Whilst I can't find a direct link to it, Yahoo had a bit of a reputation for a very casual work environment, with t-shirts and sandals the norm. It was probably a product of their success coming straight out of university and into the corporate world, where they grew their own business culture. This kind of thing flowed on to many of the other successful Internet companies like Google, who lavish their employees with almost everything they will ever need.

Image courtesy of Robert Scoble.

Tom Anderson, one of the co-founders of MySpace, is not what you'd call your typical geek, holding a degree in Arts and a masters in Film. You'd struggle to find him even associated with such titles, yet he's behind one of the largest technical companies on the Internet. Truly the face of the modern geek aspires to something more like Tom Anderson than to a young Bill Gates.

I find this interesting because of the company I keep. We all love computer games and the latest bits of tech, but you'd be hard pressed to find among us anyone you could really call a stereotypical geek. I think this is indicative of the maturity the IT industry has acquired. The term IT Professional no longer conjures up the idea of a basement dwelling console hacker with thick glasses; rather it gives the impression you'd expect of a professional in any industry, something which carries with it a decent chunk of respect.

I guess the next step is when we start seeing Joe the IT Professional used in political campaigns.