Monthly Archives: December 2009


Norway’s Puzzling Lightshow.

If there’s one thing I’m a sucker for it’s a good light show. When I used to live out in the country I had an absolutely fantastic view of the night sky and I can remember spending many nights lying on top of our concrete water tank staring up at the sky. I even remember waking up at 2am one morning to spend a chilly couple of hours watching an extremely active meteor shower (I can’t remember exactly which one it was, but it was most likely the Leonids). You can then imagine my excitement when I heard of this spectacular light show that dazzled Norway:

[Image: Fenomen over Borras]

Apparently, this is not a Photoshopped image, as there are several more just like it, taken from various locations. This morning in northern Norway, people saw a strange light in the sky which shocked residents and so far, the phenomenon has yet to be explained. This picture was taken from a pier, looking to the east, approximately at 07.50 am local time. “I can imagine that it went on for two, three minutes,” said the photographer Jan Petter Jørgensen. “It was unbelievable. I was quite shaken when I saw it.”

“It consisted initially of a green beam of light similar in colour to the aurora with a mysterious rotating spiral at one end,” said another eyewitness, Nick Banbury of Harstad, quoted on Spaceweather.com. “This spiral then got bigger and bigger until it turned into a huge halo in the sky with the green beam extending down to the earth. According to the press, this could be seen all over northern Norway and must therefore have been very high up in the atmosphere to be seen hundreds of km apart.”

It didn’t take long for people to speculate that this was probably due to some kind of rocket launch. As it turns out (as shown in the article) it was in fact a Russian Bulava missile, which has had quite a checkered past. It’s designed to penetrate missile defence systems by being able to withstand a nuclear blast from 500m away, as well as launching up to 6 warheads plus countermeasures all from the one launch vehicle. It’s an impressively scary beast to behold, except for the fact that 7 out of its 13 launches have ended in failure. Still, this missile is interesting for other reasons as well, namely because of the way it failed.

When you’re designing a new launch system you’re pretty much guaranteed to have a fair few failures in the initial design and testing stages. NASA was known for putting on quite a show back in its heyday, launching every other week, which usually ended up with the payload exploding mid-flight. Whilst it would appear from the ground that this was because of some failure on the rocket’s part, in general when a launch system explodes during testing it’s usually the ground crew telling the rocket to explode. They do this in order to protect us civilians, since a rocket capable of orbital (or even sub-orbital) speeds could very easily make its way into populated territory. If the rocket is misbehaving the last thing you want it doing is flying until it’s out of fuel, so the next best thing is to make it self destruct before it can do any damage.

Looking at this picture it would appear that whoever was testing this rocket either didn’t have the capability to initiate a self destruct or simply chose not to. The reports I’ve been reading regarding the incident have Russia admitting to it being a failure of the third stage which caused the rocket to tumble out of control. Since it had completed its first two stages it was probably several hundred kilometers above the Earth (its highest point is 1000km according to Astronautix), so letting it spiral out of control was probably the best option. Detonating at that height would give rise to quite a bit of debris, some of which would gain the required momentum to achieve orbit (albeit an unstable one).

You might then be thinking: why didn’t they just shut the damn thing off? Well therein lies the problem. Much like the two boosters strapped to the side of the shuttle’s rust-coloured fuel tank, the Bulava missile is a solid fuel rocket. The entire propulsion system is basically a giant firework and once it’s lit it can’t really be put out, short of it exploding. So once the Bulava was in the air it was either going to keep flying until it was out of fuel or blow up spectacularly; either way we were in for a bit of a light show. Russia’s choice of failure mode was probably the safest, and no one will deny it was the coolest.

Even though Russia was testing something designed specifically for war, something I’m usually against, I still can’t help but marvel at it. It’s a decidedly Russian design and despite its failures Bulava is still quite an awesome piece of tech to behold. Even more so when it puts on a show like this for us.

The iPhone’s Curious Flash Immunity.

No matter where you go on the web you’re almost guaranteed to run into a Flash object or two. Primarily this is because Flash has the awesome ability of being delivered through a web browser whilst still having most of the functionality of an installed application. Such capability led it to become ubiquitous on the web, but the real king of bringing Flash to the public was the video giant, YouTube. Right now it’s almost guaranteed that anyone surfing the net has Flash installed, so the web has become increasingly Flash based with everything from advertisements to this blog using it. So it becomes a real curiosity when the world’s most popular smartphone, the iPhone, lacks the capability to use Flash.

Searching around for the answer leads to a lot of interesting articles. Many of them show that Adobe is keenly interested in getting Flash to work on the iPhone and has been in talks with Apple for quite a while. Whilst Adobe has been pretty open about the process, Apple had taken their usual vow of silence on the matter for quite a long time. More recently however Steve Jobs came out and said that it was just too slow to run on the iPhone and that it would detract from the overall experience. Couple this with the fact that Flash might impact poorly on the iPhone’s battery life and you’ve got a recipe for a significant amount of damage to be done to Apple’s “it just works” reputation. I can’t fault them for wanting to protect that.

Enabling Flash would also commit one of the worst sins against Apple: taking away control of its platform. The reason why Apple tech just works the way it does is that almost all of their products are, to draw a gaming analogy, closer to consoles than they are to PCs. The hardware in any Apple product is strictly controlled which means the software can then be coded directly against that platform. This reduces bugs, shortens the turnaround time for new software and is the main reason that Apple has developed such a good reputation for delivering complex tech that used to be reserved for only the highest rank of techies. Allowing things like Flash or Silverlight would allow anyone to run code that was not only not approved by Apple, but also not designed specifically for the iPhone platform. I can easily see people trying to use the latest Flash games on their iPhone only to be completely disappointed, tarnishing the iPhone’s shining exterior.

However there’s one thing that Apple values above its image, and that’s its bottom line. Apple has managed to wrest almost a 10% share of the PC market (and 13.7% of the smartphone market) away from the corporate giant that is Microsoft. Whilst I’m sure they derive a decent profit stream from the hardware sales (the bill of materials for an iPhone tops out under US$180 and it sells outright for almost double that), the real money is in the App Store. For Apple this is an absolutely brilliant strategy: want to have something run on our hardware with an install base of over 21 million? Sure! For a yearly fee and 30% of all revenue you make through our system it can be done. This is almost pure profit for Apple once the initial cost of setting up the store is recovered, and I doubt that more than 10% of the revenue generated from an app is spent on auditing it. While Apple is tight-lipped on just how much they’re making, estimates have pegged it at a whopping $2.4 BILLION a year. Even at half or a quarter of that revenue the App Store is a gravy train for Apple, since they’re doing comparatively little work for an enormous benefit.

So we should not be surprised when iPhone after iPhone is released and Flash doesn’t make an appearance with it. The potential hit to Apple’s reputation and revenue is too great for them to cave in to the Internet’s most popular client side programming framework. It’s a bit of a shame, but in the end when you’ve got enough market sway to make the biggest Flash based site in the world convert all their videos so that they’ll work on your mobile device, I don’t think anyone will argue with your decision.

They may deconstruct your decision for blog fodder however ;)

Smartphone Virtualization? Oooooh shiny…

I love virtualization, really I do. Ever since my first encounter with it back in university, when I didn’t have the spare cash to build another PC to run Linux so I could compile my projects at home, I’ve had a fondness for it and the flexibility it provides. This web page is coming to you from a virtualized Server 2008 instance on VMware’s vSphere 4 and the switch from Workstation was both painless and fruitful. So when VMware announced a while back that they were planning to do the same thing with smartphones I was excited, but back then with Android still being a small player I wrote it off as cool but probably not something I’d want or need. Recent news however has changed my mind:

VMware has flagged smartphones as the next platform in the evolution of virtualisation, but at least one major competitor, Microsoft, says that it sees no demand for the technology.

Speaking to Computerworld, Srinivas Krishnamurti, VMware’s head of mobile phone virtualisation said the company’s vision for virtualisation on smartphones went beyond the basic dual-boot prototypes currently in development to one that ran both a private and work operating system and profile at the same time.

“We don’t think dual booting will be good enough – we’ll allow you to run both profiles at the same time and be able to switch between them by clicking a button,” he said. “You’ll be able to get and make calls in either profile – work or home – as they will both be live at any given point in time.”

Bringing virtualization to the smartphone platform opens up some very interesting possibilities. The first thing that comes to mind is that for developers like me who want to target all the major platforms (Windows Mobile, Android and iPhone) we have the potential of loading up several phone OSs on our hardware, allowing us to quickly test against real hardware. Whilst I’m sure that Apple won’t release an iPhone image to use with it there’s still quite a bit of value in being able to quickly test on real hardware. The simulators only go so far.

The other interesting thing that might be possible would be the integration of this virtualization with some of VMware’s current line of products, like VMware View. In essence View decouples the OS from the underlying hardware and the bulk of the hard work is done by a backend server. It’s reminiscent of the old days of dumb terminals hooked up to a giant mainframe, however it has the benefit of the user’s data being centrally located (and protected) whilst giving them the flexibility to, say, move from office to office and take their desktop with them. The same could potentially be done with smartphones, which would give admins unprecedented control over their users’ mobile environment. RIM and Microsoft give you a pretty decent amount of control over your users’ phones already, but something like this integrated with View would allow you to see what your user is seeing on their phone (like RDP for phones). I can bet there’s more than a few admins who would like that.

It’s also one of those products that lets you get more out of your hardware, something I’m very fond of. Whilst I’m not going to be constantly switching between OSs I can easily see myself hearing about a new cool app on the Android marketplace and wanting to switch over to try it. VMware are currently marketing it as having one image for work and one for home which is a damn good idea when you consider that many companies will require encryption on your device if it has work emails on it. If I could avoid having to put my PIN in every time I wanted to use my phone by having a second OS then I’d be all over it.

As with most of VMware’s products it will take a while to find its place in the world. I’d be guessing that the first few versions will work as advertised on certain handsets until they get some real demand for it. Right now it seems to be firmly stuck in the developer’s plaything market but as it matures I can see quite a few awesome possibilities that could turn your regular old smartphone into something that could almost qualify as a pocket desktop replacement.

I’ll be keeping my eye on them for the next year, that’s for sure.

Facts Abound, Fear Remains.

Once something is ingrained in the public’s mind it becomes increasingly difficult to convince them of the opposite idea. Initial thoughts turn into innate biases and anecdotal evidence becomes undeniable fact. I can’t really put the whole blame on the public themselves since we don’t all spend the hours required to fact check everything, so some of the blame rests with the media and their reporting of such things. One such thing is the link between mobile phones and cancer which, despite a fair body of evidence to the contrary, still manages to rear its ugly head at the dinner table. Even with evidence like this people will still choose to believe the anecdotes over fact:

A very large, 30-year study of just about everyone in Scandinavia shows no link between mobile phone use and brain tumours, researchers reported on Thursday.

Even though mobile telephone use soared in the 1990s and afterward, brain tumours did not become any more common during this time, the researchers reported in the Journal of the National Cancer Institute.

Some activist groups and a few researchers have raised concerns about a link between mobile phones and several kinds of cancer, including brain tumours, although years of research have failed to establish a connection.

What interests me the most about this is that although people will still spout things like “cell phones cause cancer” they will still go ahead and use them day after day. I think the main reason behind this is that although there might be a chance that it does increase your risk of cancer (most of the studies still conclude that the 20~30 year usage range needs further research) it is so low that it doesn’t really affect them. The same can be said for smoking and unhealthy eating since for the most part the damage is so low and slow that you don’t notice it building up on you. This was very true with cigarettes 50 years ago when doctors would recommend them to their patients, not knowing the long term health problems the addiction would incur. The mental gymnastics people employ for their self destructive habits is quite amazing sometimes.

The real issue here is one of education, since the method of communication (mass media et al) with the public at large is not particularly suited to this kind of critical thinking. This has become quite apparent recently with the whole Emissions Trading Scheme legislation which, thanks to an almost soap opera-esque leadership spill in the Liberal party, has pushed Tony Abbott and his bizarre ideas on climate change to the fore. Right now it appears he’s attempting to make it look like the Rudd government is trying to tax us all for no appreciable benefit, when he can do the same for basically free. Trying to find some solid information on his policy leads me to mostly dead ends, but the few articles I could find on it would see Abbott attempt massive carbon sequestering, something which does not solve the underlying problem. Let’s also not forget that Abbott has also promoted a climate change denier in the form of Nick Minchin (to call him a skeptic is completely misleading), a man who 14 years ago was a second hand smoke “skeptic”. He’s right up there with the other loonies who believe that this whole carbon thing is an attempt to deindustrialize the western world (and bring in communism, that’s right, climate change is a COMMUNIST CONSPIRACY!!). You can see why I’m worried about these people pushing their views on the wider public of Australia; they’re disregarding all evidence in favour of pushing party lines.

I’m just glad that they’ll go down in flames come the next election.

Whilst there are many great educational and skeptical resources available out there, most of them aren’t really targeted at the everyman. Skeptics et al have a terrible habit of preaching to the choir and their rhetoric leaves much to be desired. When your target audience thinks that Ask Bossy is good lunchtime reading you’ve got to change your game plan to match, and that’s a process that many of us (myself included) find quite hard to do. The day that skepticism becomes sexy and cool is the day that I stop writing on the subject, since everyone will be doing my work for me.

Or maybe the ABC just needs to move Media Watch to primetime.


SpaceShipTwo, Now a Reality.

If there’s one thing that gets me excited it is seeing news about space that makes it to Australian TV. I don’t watch that much television normally but I do catch the morning news before I head off to work for the day. So you can imagine my surprise when none other than Sir Richard Branson appeared on my TV showing off Burt Rutan’s latest creation, SpaceShipTwo. Whilst we’d known about White Knight Two for some time (and saw a couple of videos of it flying around the place) the critical component was always missing. Today Sir Richard unveiled SpaceShipTwo for the first time, and it really couldn’t have come any sooner:

MOJAVE, Calif. – It has been pre-sold as an “out of this world premiere” – and you can’t get more off-world than unveiling a spaceliner built to whisk customers to the edge of space.

SpaceShipTwo is making its debut here at about 8:30 p.m. or 9 p.m. ET (5:30 – 6 p.m. PT) today. The super-slick looking rocket plane will be showcased as the world’s first passenger-carrying commercial spacecraft. The enterprise is under the financial wing of well-heeled U.K. billionaire and adventurer, Sir Richard Branson.

Branson created Virgin Galactic – billed as the world’s first commercial spaceline.

While there are few images of the completed craft floating around I can say that the short tour they did of SpaceShipTwo on the news this morning was spectacular. The renders, it seems, must have been pulled from the design files because the completed craft is almost identical to its 3D representation. Branson apparently wasn’t allowed to show the inside of the craft (due to FAA regulations, apparently) but I’d also hazard a guess that the internals weren’t fully completed. The windows, for example, were completely blacked out (to be honest they looked painted on), leading me to believe that they haven’t certified them yet. Still the first test flight is scheduled for tomorrow, meaning that the majority of the flight hardware is in there, a significant milestone indeed.

Branson also let loose a few other interesting details. Firstly the next 18 months will be spent testing and verifying the craft’s capabilities. If this is going to be anything like the SpaceShipOne program they’ll do around 20 test flights the majority of which will be verifying the aerodynamic characteristics of the craft with about a quarter of them being powered flights to test the rocket engine and feathering system. 18 months is a fairly aggressive timeline for verification of a new craft but they’ve done this before so much of the groundwork is already laid, they just need to prove it will be safe enough for their paying customers.

And therein lies the second tidbit of information that Branson let slip. There are already 300 customers who have paid the US$200,000 price tag for a flight and several thousand who paid a security deposit to secure a flight at a later date. Branson said several times that their main concern was reducing costs to make space travel far more accessible, and by the looks of it he has no shortage of early adopters who are willing to foot the bill. To really put his money where his mouth is, his whole family will be going up on board the very first commercial flight, cementing his rhetoric in everyone’s minds.

Another fact which piqued my interest was the possibility of tiered flights. You see many people of varying age groups are going to want to use this service and they all have different physical capabilities. We all have an innate limit on how much g-force we can take before blacking out, and the comfort zone is well below that. For the majority of us g-LOC (g-force induced loss of consciousness) sets in somewhere between 4 and 6 g, however this can be alleviated in 3 ways: changing the way you sit (like an astronaut lying down), applying the force gradually and wearing a special suit. Branson has mentioned all three of these characteristics before, however today he mentioned that older people and those with medical conditions would still be able to fly aboard SpaceShipTwo, just not as high into space, limiting any strain on their bodies. It’s an interesting idea and definitely increases his potential market, but we’ll have to see how it pans out.

With this announcement Virgin Galactic has stepped out from the vaporware shadows that everyone had relegated them to. It’s an exciting time for commercial space travel with people like Branson creating the buzz that companies like SpaceX will be able to ride once their crafts are ready for human endeavours. The next 10 years are going to be extremely interesting as we see the rise of the new adventurers who dare to explore this final frontier.

Note: Just as I was about to hit the publish button on this article a friend of mine sent me this picture from the Space Fellowship:

[Image: SpaceShipTwo, via the Space Fellowship]

Absolutely magnificent.

My thanks to Danne for sending me that pic! :D

Google DNS: Oh How Deliciously Devious.

One of my long time friends (and now work colleague) had a fantastic question to throw at people in interviews to see how they’d fare. It was in a category of questions that I’ve come to know as the “flail” type. There’s no real right answer to it and that’s the point; they’re designed to put you on the spot and see how you deal with it. The interviews he used this in were for a web administrator position and the question was simply: What is the Internet? Now you’ll get many wide and varied answers to that depending on the person’s background and level of expertise. At the same time you get to see their thought processes in motion, something which is invaluable when you’re hiring someone to deal with any and all of the obscure problems a high traffic web site can have.

Any good answer to this question should include at least a passing reference to the Domain Name System (DNS), which is responsible for translating human readable web addresses (like www.therefinedgeek.com.au) into machine readable numbers (150.101.112.123). Hosting a service like DNS is no small feat and as such it’s usually only ISPs and some of the larger companies and government organisations that run their own resolvers. Google, who it seems won’t be satisfied until the Internet is renamed after them, have decided to offer up a free public DNS service to the world at large:

Today, as part of our ongoing effort to make the web faster, we’re launching our own public DNS resolver called Google Public DNS, and we invite you to try it out.

The average Internet user ends up performing hundreds of DNS lookups each day, and some complex pages require multiple DNS lookups before they start loading. This can slow down the browsing experience. Our research has shown that speed matters to Internet users, so over the past several months our engineers have been working to make improvements to our public DNS resolver to make users’ web-surfing experiences faster, safer and more reliable. You can read about the specific technical improvements we’ve made in our product documentation and get installation instructions from our product website.
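If you’re curious to see that name-to-number translation in action, it’s easy enough to poke at from a Powershell prompt. A rough sketch (the first line uses whatever resolver your machine is already configured for; 8.8.8.8 is the address Google published for the new service, and the timing is only indicative):

# Resolve the name with whatever DNS server your machine currently uses
[System.Net.Dns]::GetHostAddresses("www.therefinedgeek.com.au")

# Ask Google Public DNS directly and see how long the lookup takes
Measure-Command { nslookup www.therefinedgeek.com.au 8.8.8.8 } | Select-Object TotalMilliseconds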

Now the first thing that popped into my head when I read this was that Google was basically saying “Hey, here’s another awesome free service” while holding back on the fine print of “we’re using this to make our advertising networks more desirable/profitable”. Indeed many of Google’s services track your usage of them and other applications whilst they’re running, which is then data mined for all sorts of good stuff, usually around targeting advertising better. You really didn’t think all of Google’s stuff was free because they’re just nice guys, did you?

However this doesn’t appear to be the case for the Google DNS service. Checking out their privacy policy reveals no direct links to their Adsense or Adwords programs, nothing on data mining apart from that done to improve the service, and overall the majority of the data is chucked out about 2 weeks after they gather it. I’m in two minds about this. The first is that internally they knew people would think this, which is supported by the amount of documentation they released right off the bat saying otherwise. If they did use this to augment their other services they would’ve been fighting a PR nightmare for a long time which would ultimately kill the service (something I’m sure they’d like to avoid). The second is that they’re forcing the hand of others to stop sitting on theirs and implement new features, like DNSSEC.

There’s been an increasing amount of talk about getting DNSSEC implemented on all the root servers. The original plan was to do one root server each month from December 1st until they were all completed. In my searches I haven’t actually come across any confirmation that this actually occurred (and my network knowledge is a bit lacking in the ability to actually check it), so Google might just be trying to show them how it’s done in the hopes they’ll pick up their game. Granted their service is not a root server and is non-authoritative for all the domains it doesn’t host, but they’ve definitely shown it’s possible to implement such a system sooner rather than later.

The one thing that’s got me on tenterhooks about this is the fact that at their whim Google can change their terms of use for this service, opening up the data mine that they’ve cautiously stayed away from. It also makes me wonder if they might’ve had some connection with the L root server identity theft that happened at the start of last year since anyone malicious would’ve used that to do a whopping great deal of cache poisoning instead of providing a real DNS service. I’ll grant that’s a real stretch of the imagination, but Google was completely capable of performing such a feat.

In the end I’m still not going to use their service simply because to me, the end user, there’s no appreciable difference. Sure my queries might resolve a bit faster and be immune to some of the more exotic DNS attacks, but as an Australian with not so spectacular Internet and relatively good internal security the cost of changing is greater than the benefit. Still, kudos to Google for providing yet another free service.

Congress, Get Your Hands Off NASA!

One thing that’s guaranteed to get me going is the US Congress meddling around in NASA’s affairs. They have enough internal troubles as it is without Congress getting involved and trying to force them in a certain direction. Sure I can understand that the US wants results for their money and therefore feels they should be able to control NASA’s activities, but with them investing only 0.55% of their total budget in the program you can see why I get all hot under the collar when it’s targeted for reductions in spending. In fact the US military’s spending on space exceeds NASA’s budget by a fair margin (it was $22 billion 3 years ago), which just makes the amount of meddling Congress does in NASA’s affairs even more ludicrous.

If you’re wondering what’s spurred this rant it was this particular piece of news that opened up the old wounds of Congress sticking its nose in where it’s not wanted:

Of the $400 million in ARRA funds Congress designated for space exploration projects, NASA initially planned to spend $150 million on competitively awarded projects meant to seed the development of commercial space transportation systems capable of ferrying astronauts to low Earth orbit.

“These efforts are intended to foster entrepreneurial activity leading to job growth in engineering, analysis, design and research, and to economic growth as capabilities for new markets are created,” NASA explained in a commercial crew and cargo white paper it sent Congress in May.

But House and Senate lawmakers told NASA to reduce the amount for commercial crew and cargo development to $90 million.

NASA was fortunate enough to get a small piece of the stimulus pie (about $1 billion total), of which it’s spent about half so far. Much of that went to the Constellation program in the hope of keeping development on track for a 2015 debut launch. The problem however is that even if they do make that deadline there’s still a 4 year gap where the US won’t have any capability to deliver astronauts to LEO, the ISS and beyond. There are two schools of thought as to how they’re going to bridge the gap: the first being for them to continue their arrangements with Russia using Soyuz (although they’re already tapped out), and the second using commercially available solutions. Congress it seems has decided that the second is not worth the money and the first is not particularly feasible.

Seriously, what were they thinking?

The COTS program was a brilliant idea and it has definitely helped spur companies like SpaceX forward. Injecting additional cash into these companies would see the development of fully private manned spacecraft accelerated and would thus close the launch gap that NASA is doomed to suffer. I’m not exactly sure what the congress critters have in mind when it comes to NASA but I guess I shouldn’t be surprised. Their dealings with the agency, apart from its initial inception for the space race, have always been rather awkward and short sighted.

This also affects their involvement with the International Space Station. They’ve stated in the past that they’re not interested in continuing their support of the ISS past 2015, drastically reducing the ROI on the project (it was slated for 10 years at full functionality; retirement in 2015 would reduce that to 5). Russia however has said that they’re quite happy for the US to detach their modules, as they’ll keep maintaining their sections of the craft. They have extensive experience in long term station maintenance so it’s no wonder they want to keep their investment for as long as possible. The US however seems willing to ditch all their investment in the project without further consideration.

The reason that this is such a big deal is that the other big partners in the ISS, namely the ESA and JAXA, have to rework their schedules in accordance with the US decisions. They’re just cargo services at the moment but even those sorts of missions require extensive planning; you can’t just whip up an HTV or ATV in a couple of weeks. In fact they’ve begun to put pressure on the US to make a decision about the matter, but it’s still all up in the air.

Really the heart of the problem here is the giant bureaucracy that plagues both NASA and the US congress. SpaceX has proven that they can develop a launch capability with a team of hundreds, not thousands. They’ve also demonstrated that they can recover from launch aborts in a matter of hours, not days. This can all be easily attributed to the fact that they run with a minimal set of red tape and congress’ decision to funnel money away from companies as capable as they are is just unfathomably stupid.

For the negligible expenditure that NASA costs the US I am always confounded by how many people still think it’s a waste of money. If it wasn’t for NASA you wouldn’t have GPS, satellite television and programmable pacemakers. It would be nice if Congress could get their hands out of NASA’s business for a while so that NASA can properly define its objectives and hopefully get itself back on track. I’d also love to funnel another 1% of GDP into them in order to develop things like a moon base, but I know that’s never going to happen.

Maybe I should start my own nation, with space rockets and SCIENCE. ;)

Refused Classification, Now a Marketing Strategy?

It seems that the Australian Classification Board doesn’t mind serving me up with blog fodder every couple of weeks so I can harp on about how the mature Australian gaming community needs an R18+ rating. Whilst I won’t re-iterate the point I’ve made time and time again about how having different standards for one single type of media is just silly, it does seem that there might be another side to this whole R18+ debacle that no one has considered. It’s an exceedingly good way to get press for your otherwise unknown game:

The Classification Board has stated that “drug use related to incentives or rewards” is the reason why gangster-themed MMO Crimecraft has been refused classification in Australia. According to the Board’s report obtained by Kotaku this afternoon, Crimecraft “contains the option to manufacture, trade and self-administer legal “medicines” and illegal “boosts”… Boosts are sometimes referred to as “drugs” both in the game and in the Applicant’s submissions to the Board.”

One type of boost is called Anabolics, which the Board notes “is named after a class of proscribed drugs and that the Applicant describes boosts as “like real-life steroids”. In addition, the names of boosts mimic the chemical and colloquial names of proscribed drugs.”

I’d never heard of this game until it got refused classification by the ACB, and a quick Internet rundown on it gives only 5 articles on Kotaku and an extremely sparse Wikipedia article. This is a woeful amount of press for what is supposed to be an MMORPG, even one with such niche appeal as this. What makes this interesting is that the refusal for classification is scarily similar to the one that hit Fallout 3 almost 18 months ago, and one they subsequently got around without too much hassle. So why would you submit a game with the potential to hit the classification tripwire when you know the workaround? You can see why I’m smelling a PR stunt here.

Granted this is a bit of a stretch and it is entirely possible that they thought there was no issue with the names they used. Still when Fallout had to change the name of morphine to Med-X you can be guaranteed that using any real world names for drugs that are used in a game is a sure fire way to rile up the ACB. I’m sure that they’re going to resubmit with the modifications required but the fact remains, they managed to generate quite a bit of hype for their game which would’ve probably gone unnoticed in Australia otherwise. It is a rather niche game so they were probably relying on the open beta to generate most of the buzz for them (which coincidentally was not open and only available to the US and Canada) and you’d have to do a bit of digging to find any original reports of people actually playing the game.

Making and marketing a game, especially in a genre dominated by Blizzard, is no easy feat and I can easily see the developers agreeing to such a stunt in the hopes it would generate a bit more buzz. There are no real ethical issues to speak of here but I still can’t help but feel that employing such a tactic is a bit, well, cheap. Simple things like a fully fleshed out Wikipedia article, a YouTube channel and a corporate Twitter account do wonders for promoting a game that would otherwise slip under the radar. Crimecraft doesn’t appear to have any of these so it’s far more likely that this was a genuine submission rather than an attempt at free PR.

After all this though I’m still not interested in the game, but I’ve got a feeling I’m not really in their demographic. Still I have to wonder just what their demographic is with such a game, because traditionally people who are playing games like that (GTA, Saints Row) aren’t the MMO or PC game type. It’s still quite possible that they’ll find their niche but when giants like WAR and AOC managed to fall flat in the MMO market I must say, I have my doubts.

Still, they gave me something to write about and that’s well….something I guess :)

Powershell: Why Did I Resist?

A good deal of any system administrator’s job is automation. Even when you’re working in small environments, doing the same thing on every user’s machine individually is needlessly tiresome and always error prone. My current environment has well over 400 servers and at least 1000 desktops so anything that needs to touch all of them has to be automated; there’s just no other option. In the past VBScript was the be all and end all of Windows based scripting and is still used as the de facto automation language for many IT shops today. However with the coming of Vista and Server 2008 we saw the introduction of a plucky new tool called Powershell (first seen in the wild in 2006) which looked to be the next greatest thing for automating your IT environment. Due to Vista’s poor reception (and, by association, Server 2008’s) Powershell didn’t really take off that well. In fact I’d actively ignored it up until about 6 months ago when I started looking at it more closely as a tool to automate some VMware tasks. Little did I know then that this new world of Powershell would soon make up the majority of my day to day work.

Now the developers out there will know that Visual Basic (VB) is somewhat of a beginner’s programming language. Sure it’s feature complete when compared to its bigger brother C#, however it’s rather lax with its standards and this makes any code done in VB rather inelegant. This was probably why I shied away from Powershell initially, as I thought it would just be an evolutionary step from VBScript, but I couldn’t have been more wrong. The syntax is decidedly closer to C# than VB although the legacy of behind the scenes tricks to hide some complexities from its users is still there, albeit with the added benefit of those small tricks being available should you know where to look. Additionally the ease of integration with other Microsoft coding platforms (like loading .NET dlls) is absolutely amazing, giving you the power of doing almost anything you can with their other languages right there in your script.
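To give you an idea of just how seamless that .NET integration is, here’s a quick sketch (nothing from my production scripts, just stock .NET classes): load an assembly and you can start using its types straight away, no compiler in sight.

# Load a .NET assembly and call into it directly from the shell
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[System.Windows.Forms.MessageBox]::Show("Hello from Powershell!")

# The whole base class library is fair game, so things like web requests become one-liners
$web = New-Object System.Net.WebClient
$web.DownloadString("http://www.therefinedgeek.com.au") | Out-File frontpage.html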

The real kicker though is the shift in focus that Microsoft has taken since it introduced Powershell all those years ago. Typically their infrastructure products like Exchange or System Center were either built by separate teams or came from another company that Microsoft had purchased. This meant that there was no standard way of interfacing with these products, making automation a real pain and usually ending with you having to use a third party tool or write reams of VBScript. For newer releases however Microsoft has built their management tools on top of Powershell, meaning that any action performed in the management consoles can be replicated via a script. This was most obvious when they released Exchange 2007, where any command you performed in the GUI would show you the Powershell command that it ran.
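To illustrate (this is a made-up example of the style of cmdlet the Exchange 2007 console reports back to you, not something lifted from a real environment), the new mailbox wizard boils down to a single command, which in turn means the same action can be scripted in bulk:

# Roughly what the GUI shows you after the new mailbox wizard finishes
New-Mailbox -Name "Jane Citizen" -Alias jcitizen -UserPrincipalName jcitizen@your.domain.com -OrganizationalUnit Users -Database "Mailbox Database" -Password (ConvertTo-SecureString "ChangeMe123!" -AsPlainText -Force)

# Which makes provisioning a whole CSV of new starters a trivial loop
Import-Csv NewStarters.csv | ForEach-Object { New-Mailbox -Name $_.Name -Alias $_.Alias -UserPrincipalName $_.UPN -OrganizationalUnit Users -Database "Mailbox Database" -Password (ConvertTo-SecureString $_.Password -AsPlainText -Force) }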

To show you how much you can do with Powershell I’m going to include two of my own scripts which I invested quite a bit of time in. The first, shown below, is a script that will scan your domain and alert you when someone adds themselves to the Domain Admins group:

$domainAdmins = dsget group "CN=Domain Admins,CN=Users,DC=your,DC=domain,DC=com" -members -expand
$list = Get-Content C:\Directory\Where\Script\Runs\DomainAdminsList.txt

# Set up the alert email
$mail = new-object System.Net.Mail.MailMessage
$mail.From = new-object System.Net.Mail.MailAddress("EmailTo@SendFrom.com")
$mail.To.Add("AddressTo@SendTo.com")
$smtpserver = "YourSMTPServer"
$mail.Subject = "Unauthorized Domain Administrator Privileges Detected."
$smtp = new-object System.Net.Mail.SmtpClient($smtpserver)

# Compare each current member of Domain Admins against the control file
foreach ($domainAdmin in $domainAdmins)
{
    $found = $false
    foreach ($line in $list)
    {
        if ($domainAdmin -eq $line) {$found = $true}
    }

    # Skip any blank lines in the dsget output
    if ($domainAdmin -eq "") {$found = $true}

    # Anyone not in the control file triggers an email alert
    if (-not $found)
    {
        $date = Get-Date
        $hostname = hostname
        Write-Host $domainAdmin "not found in control file."
        $mail.Body = $domainAdmin + " not found in control file. Script run on " + $hostname + " at " + $date + " using control file C:\Directory\Where\Script\Runs\DomainAdminsList.txt"
        $smtp.Send($mail)
    }
}

You’ll want to first run "dsget group "CN=Domain Admins,CN=Users,DC=your,DC=domain,DC=com" -members -expand > DomainAdminsList.txt" to generate the text file of domain admins. Once you’ve done that you can schedule this to run say every hour or so and you’ll get an email whenever someone gives an account domain administrator privileges. You can modify this for any group too, just update the first line with the CN of the group you want to scan.
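If you want it running every hour, a scheduled task along these lines will do the job (a sketch only; the .ps1 name here is just whatever you saved the script above as):

schtasks /Create /SC HOURLY /TN "Check Domain Admins" /TR "powershell.exe -NoProfile -File C:\Directory\Where\Script\Runs\CheckDomainAdmins.ps1"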

The second is one that I’m quite proud of, it will tell you when someone changes a group policy in your domain. Pretty handy for when you’ve got a bunch of developers who have access to do that and routinely break other people’s systems when they do. You’ll need to grab the ListAllGPOs.ps1 script from here first (although I called it GPOList.ps1):

$GPOs = .\GPOList.ps1 -query -verbose -domain your.domain.com

$DCs = "DC01","DC02"

$baseline = Import-Csv GPOBaseline.csv

$outFile = "D:\Apps\Scripts\GPOScanner\Output.txt"
$outBody = "D:\Apps\Scripts\GPOScanner\OutBody.txt"
$null | Out-File $outFile
$null | Out-File $outBody
$emailRequired = $false

Write-Host "Scanning your.domain.com"
"your.domain.com" | Out-File $outFile -append
foreach ($cGPO in $GPOs)
{
    $found = $false
    foreach ($bGPO in $baseline)
    {
        if ($cGPO.ID -match $bGPO.ID)
        {
            $found = $true

            # Flag any GPO whose modification time no longer matches the baseline
            if (-not $bGPO.ModificationTime.Equals($cGPO.ModificationTime.ToString()))
            {
                $output = "WARNING: GPO " + $cGPO.DisplayName + " has been modified since baseline."
                Write-Host $output
                $output | Out-File $outBody -append
                $output = "Modification time: " + $cGPO.ModificationTime
                Write-Host $output
                $output | Out-File $outBody -append
                $emailRequired = $true

                # Trawl the security log on each domain controller for the matching object access events
                foreach ($dc in $DCs)
                {
                    Write-Host $dc
                    $logs = [System.Diagnostics.EventLog]::GetEventLogs($dc)
                    foreach ($log in $logs)
                    {
                        if ($log.LogDisplayName -eq "Security")
                        {
                            $entries = $log.Entries
                            foreach ($entry in $entries)
                            {
                                if ($entry.EventID.Equals(4663) -or $entry.EventID.Equals(4656) -or $entry.EventID.Equals(560))
                                {
                                    if ($entry.Message.Contains($cGPO.ID))
                                    {
                                        $entry | fl
                                        $entry | fl | Out-File $outFile -append
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    # Anything in the domain that isn't in the baseline at all is a new GPO
    if ($found -eq $false)
    {
        $emailRequired = $true
        $output = "New GPO " + $cGPO.DisplayName + " not found in baseline."
        Write-Host $output
        $output | Out-File $outBody -append
    }
}

if ($emailRequired)
{
    $hostname = hostname
    $date = Get-Date
    $output = "Script was run on " + $hostname + " at " + $date + " using control files located in D:\Apps\Scripts\GPOScanner. Please see the attachment for related event log information."
    $output | Out-File $outBody -append
    $mail = new-object System.Net.Mail.MailMessage
    $mail.From = new-object System.Net.Mail.MailAddress("EmailTo@SendFrom.com")
    $mail.To.Add("EmailTo@SendTo.com")
    $smtpserver = "YourSMTPServer"
    $mail.Subject = "Group Policy Changes Detected."
    $smtp = new-object System.Net.Mail.SmtpClient($smtpserver)
    $mail.Body = Get-Content $outBody | Out-String
    $att = new-object Net.Mail.Attachment($outFile)
    $mail.Attachments.Add($att)
    $smtp.Send($mail)
    $att.Dispose()
}

Again you’ll want to run ".\GPOList.ps1 -query -verbose -domain your.domain.com | Export-Csv GPOBaseline.csv" to generate the baseline. This script will first look for any changes then scour the security logs of your domain controllers to find who did it, sending you the logs of who changed it and when. Pretty neat eh?


The Everyman’s Perception, The Engineer’s Problem.

One of my university lecturers had a reputation for talking for hours on end about his previous projects (Dr John Rayner if you’re interested). This wasn’t atypical of many of our lecturers, since the majority of them had spent many decades in industry or research before becoming lecturers, but Dr Rayner was a curious exception to those who were just being a little nostalgic. He was a physicist turned engineer, which is strange because even though we share some common ground most of us would never think of “crossing the border” as it were. As such we routinely had him sub in when either of our physics or engineering teachers were absent and it was guaranteed that his class would somehow revolve around one of his previous projects. The twist was, even though we’d always think we were just wasting our time listening to him, by the end we all understood the material we needed to be taught, even though he rarely delved into the theory required. One of the most interesting lessons we got from him was on the expectations of customers and how they will influence your designs.

He was working on a community housing project in one of the northern states and one of the concerns was water usage. They’d optimized basically everything apart from the toilets, so it was left to him and his team to optimize the amount of water that they used. They then designed a system that used around a tenth of the water of a conventional toilet, a considerable saving. However after passing initial testing (using an IEEE approved analogue for human waste, basically sausage skin filled with sawdust) the toilets were sent along for their real world exposure. Curiously, whilst no one reported any problems actually using them, the toilets weren’t well received. As it turns out the perception of so little water being used made most people feel uneasy, thinking they hadn’t properly flushed or that they weren’t clean. Thus the design was reworked, although he was coy on the actual results.

This whole lesson came steaming back when I saw this article yesterday:

Researchers have demonstrated a prototype device that can rid hands, feet, or even underarms of bacteria, including the hospital superbug MRSA.

The device works by creating something called a plasma, which produces a cocktail of chemicals in air that kill bacteria but are harmless to skin.

The team says that an exposure to the plasma of only about 12 seconds reduces the incidence of bacteria, viruses, and fungi on hands by a factor of a million – a number that stands in sharp contrast to the several minutes hospital staff can take to wash using traditional soap and water.

The first thing that sprang to many people’s minds is how this could be used to eliminate the need for washing your hands. It’s an interesting idea, since the use of this technology could be quite a bit more hygienic whilst saving water and towel waste. However, whilst novel and indeed an elegant alternative, it will take many years for such things to replace the norm, simply because people won’t feel comfortable walking out of the toilet without washing their hands.

It’s a challenge that every engineer will face when they’re designing and building a new system. There are a lot of social and technical norms out there and going against them won’t do anything to help the adoption of your product. I think this is the problem that Google Wave has faced recently: it has melded so many different technologies (and therefore expectations of how it will function) that we’re not quite sure how to go about using it. The fact that it has no real physical analogue doesn’t help the matter either, and that’s why my Wave account has sat unused for the better part of a month.

So it becomes the engineer’s challenge to understand the everyman and work with him, since they will become the ones using our creations. I used to look upon this as unnecessary rework but over time I grew to appreciate the familiarity that came with certain lines of products (thank you Microsoft ;)), making learning and utilizing them to their fullest so much easier. A good understanding of your users can be as valuable as a good understanding of the solution, and I’m forever thankful to the eccentric Dr Rayner for teaching me that.