I’m no conspiracy theorist; my feet are far too firmly planted in the world of testable observations to fall for that level of crazy. But I do love it when we, the public, get to see the inner workings of secretive programs, government or otherwise. Part of it is sheer voyeurism, but if I’m truthful the things that really get me are the big technical projects, things that, done without the veil of secrecy, would be wondrous in their own right. The fact that they’re hidden from public view just adds to the intrigue, making you wonder why such things needed to be kept secret in the first place.
One of the first things that comes to mind is the HEXAGON series of spy satellites: high-resolution observation platforms launched during the Cold War whose imagery still rivals that of satellites launched today. It’s no secret that all space-faring nations have fleets of satellites up there for such purposes, but the fact that the USA was able to keep the exact nature of the entire program secret for so long is quite astounding. The technology behind it was what really intrigued me, as it was years ahead of the curve in terms of capabilities, even if it didn’t have the longevity of its fully digital progeny.
Yesterday, however, a friend sent me this document from the Electronic Frontier Foundation which provides details on something called the President’s Surveillance Program (PSP). I was instantly intrigued.
According to William Binney, a former technical director at the National Security Agency, the PSP is in essence a massive data-gathering program, with possible intercepts at all major fibre terminations within the USA. The system simply siphons off all incoming and outgoing data, which is then stored in massive, disparate data repositories. This in itself is a mind-boggling endeavour, as the amount of data that transits the Internet in a single day dwarfs the capacity of most large data centres. The NSA then ramps it up a notch by being able to recover files, emails and all sorts of other data based on keywords and pattern matching, which implies heuristics on a level that’s simply mind-blowing. Of course this is all I’ve got to go on at the moment, but the idea itself is quite intriguing.
For starters, creating a network that’s able to handle a direct tap on a fibre connection is no small feat in itself. When the fibres terminating at the USA’s borders are capable of speeds in the gigabits-per-second range and beyond, the required infrastructure to handle that is non-trivial, especially if you want to store the data afterwards. Storing that amount of data is another matter entirely, as most commercial arrays begin to tap out in the petabyte range. Binney’s claims start to seem a little far-fetched here, as he states there are plans scaling up into the yottabyte range, though he concedes that current incarnations of the program couldn’t hold more than tens of exabytes. Barring some major shake-up in the way we store data I can’t fathom how they’d manage to create an array that big. Then again, I don’t work for the NSA.
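To get a feel for the scale involved, here’s a quick back-of-envelope calculation. The link speed and utilisation figures are my own illustrative assumptions, not anything from Binney’s account:

```python
# Rough storage estimate for a single tapped fibre link.
# All figures below are illustrative assumptions, not actual NSA numbers.

link_speed_gbps = 100   # assumed aggregate link speed in gigabits per second
utilisation = 0.25      # assume the link runs at 25% of capacity on average

bytes_per_second = link_speed_gbps * 1e9 / 8 * utilisation
seconds_per_day = 86_400
terabytes_per_day = bytes_per_second * seconds_per_day / 1e12
print(f"{terabytes_per_day:.0f} TB per day from one link")

# An exabyte is a million terabytes, so even a single fast link
# takes years to fill one, which puts "tens of exabytes" in context.
days_to_exabyte = 1e18 / (bytes_per_second * seconds_per_day)
print(f"{days_to_exabyte:.0f} days to fill one exabyte from one link")
```

Under those assumptions one link yields a few hundred terabytes a day, so a program spanning dozens of terminations plausibly accumulates exabytes over a decade, while yottabytes remain many orders of magnitude away.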
As intriguing as such a system might be, there’s no question that its existence is a major violation of privacy for US citizens and the wider world. Such a system is akin to tapping every single phone and recording every conversation on it, which is most definitely not supported by their current legal system. Just because they don’t use the data until they have a reason to doesn’t make it just, either, as all data gathered without suspicion of guilt or intent to commit a crime is illegitimate. I could think of many legitimate uses for the data (anonymised analytical work could prove very useful) but the means by which it was gathered eliminates any purpose being legitimate.
In my travels through the USA I became intimately acquainted with their high level of airport security. Upon entering the country we were fingerprinted, photographed and grilled about what our trip was about. There were also the long lines for getting through the metal detectors and full-body scanners, usually taking up a good 45 minutes of my time. I was never chosen to go through the backscatter X-ray machines (nor did I see any of the newer millimetre-wave ones) but I did see many people go through them. Most of them weren’t exactly what you’d call a security risk (mostly people in wheelchairs) but I knew exactly why those machines were there: to make everyone feel safer without actually making them so.
This is what is referred to as security theatre. These scanners are supposedly better at detecting things that slip by metal detectors, which they accomplish by using low-energy X-rays that penetrate clothing. Solid objects should then become obvious, and should something suspicious be identified the passenger can be taken aside for further searching. Trouble is, the machines aren’t terribly effective at what they’re designed to do, and the backscatter X-ray type emits ionising radiation (not a lot, mind you, but there’s been minimal research into the effects). Using them seems like a pointless exercise, and indeed even though they’ve been in operation in the USA for quite some time the jury is still out on whether they’re actually effective.
So you can imagine my surprise when I found out that we’ll be getting these scanners at all international airports in Australia:
PASSENGERS at airports across Australia will be forced to undergo full-body scans or be banned from flying under new laws to be introduced into Federal Parliament this week.
In a radical $28 million security overhaul, the scanners will be installed at all international airports from July and follows trials at Sydney and Melbourne in August and September last year.
The Government is touting the technology as the most advanced available, with the equipment able to detect metallic and non-metallic items beneath clothing.
Now, we won’t be getting the dubious backscatter-style ones here; instead we’ll have the newer millimetre-wave ones that don’t emit ionising radiation. That’s the only good news though, as they’ve also amended the legislation that allowed you to turn down things like this in favour of a pat-down, with the penalty for refusing to go through one being that you’ll be barred from your flight. To top it all off, the transport minister Anthony Albanese sealed it with this choice quote: “I think the public understands that we live in a world where there are threats to our security and experience shows they want the peace of mind that comes with knowing government is doing all it can”.
It’s almost like he knows these things are a useless piece of security theatre, but is going ahead with them anyway.
More than a decade has passed since the events of September 11, 2001 and we’ve yet to see a repeat, or an attempted repeat, of the events that led up to that tragedy here or overseas. Health and privacy concerns aside, the reality is that these scanners don’t really accomplish what they’re designed to do and are thus just another inconvenience and a waste of taxpayer dollars. I can understand that some will feel safer seeing them there, but that doesn’t change the fact that they’re just another piece of security theatre, and a costly one at that.
For over 100 years rights holders have resisted any changes to their business models brought about by changes in technology. From a business perspective it’s hard to blame them, I mean who wouldn’t do everything in their power to keep making money, but history has shown that no matter how hard they fight they will eventually lose out. Realistically the world has moved on, and instead of attempting to preserve the status quo rights holders should be looking for ways to exploit these new technologies to their advantage, not ignore them or try to legislate them away. Indeed, if other industries followed suit you’d have laws preventing the development of automated transport just to save the buggy-whip industry.
The copyright system that the USA employs is a great example of where legislation can go too far at the request of an industry failing to embrace change. At its inception copyright was much like a patent: a time-limited exclusivity deal that enabled a creator to profit from their endeavours for a set period, after which the work would enter the public domain. This meant that as time went on there would be an ever-growing collection of public knowledge that would benefit everyone, not just those who held the rights. However, unlike the patent system, copyright in the USA has seen its term extended so many times that works which would have come into the public domain will probably never do so.
Thankfully, whilst the copyright system might be the product of an arms race between innovators and rights holders, that hasn’t stopped innovation in the areas where the two meet. Much of this can be traced back to provisions in the Digital Millennium Copyright Act (DMCA) that granted safe harbour to any site that relied on user-generated content. In essence it put the burden of work on the rights holders themselves, requiring them to notify a site about infringing works. The site was then fully protected from legal action should it comply with the request, even if it restored the offending material after receiving a counter-notice from the alleged offender. Many sites rely on this safe harbour in order to keep operating on the web, because the reverse, policing copyright themselves, is both technically challenging and resource-intensive.
However, just like every technology and provision that came before them, those safe harbour provisions, which enabled many of the world’s top websites to flourish, are now seen as a threat to rights holders’ business models. Rights holder associations have said that the DMCA as it stands is too lenient and have lobbied for changes that would better support their businesses. This has come in the form of two recent bills, one in each house: the PROTECT IP Act (PIPA) in the Senate and the Stop Online Piracy Act (SOPA) in the House of Representatives. Both of these bills have attracted heavy criticism from the technology and investment sectors, and it’s easy to see why.
At their core the bills are essentially the same. Both of them look to strengthen the powers that rights holders have in pursuing copyright infringers whilst at the same time weakening the safe harbour provisions created under the DMCA. Additionally, many of the mechanisms described in the bills are at odds with the way the Internet is designed to work, breaking many of the ideals that were set out to ensure ubiquitous access. There are also many civil liberty issues at stake here, and whilst the bills’ supporters have assured everyone that civil liberties aren’t impacted in any way, the wording is vague enough to support both interpretations.
The main issue I and many others take with these bills is the shifting of the burden of proof (and thus responsibility) away from the rights holders and onto website owners. The changes SOPA advocates mean that website administrators would be responsible for identifying copyrighted material and removing it from their sites, lest they fall prey to having their domains seized. Whilst this more than likely won’t be the downfall of the sites that made their fame inside the safe harbours of the DMCA, it would have a chilling effect on start-ups looking to innovate in any area that touches a rights holder group. Indeed, it is the sites with limited resources that would be hit the hardest, as patrolling for copyright infringement isn’t a fully automated process yet and the burden could be enough to drive them under.
It’s also evident that SOPA was put together rather haphazardly when some of the best-known copyright infringement sites, like The Pirate Bay, are actually immune to it. Indeed, many of the sites that rights holders complain about aren’t covered by SOPA (just by the current laws which, from what I can tell, means they’re not going anywhere) and thus the bill will have little impact on their activities.
You might be wondering why I, an Australian who’s only ever been to the USA once, would care about something like SOPA. Disregarding for the moment the argument from principle and the fact that I don’t want to see the USA’s technology sector die (I could justify my position easily with either), the unfortunate reality is that Australia has a rather liberal free trade agreement with the USA. What this means is that not only do we trade with them free of tariffs and duties, we’re also obliged to comply with their laws that affect trade. SOPA is one such bill, and should it pass it’s highly likely that we’d be compelled either to implement a similar law ourselves or simply to enforce theirs. Don’t think that would happen? A leaked letter from the American ambassador to Spain warned that not passing a SOPA-like bill would see Spain put on a trade blacklist, effectively ending trade between the two countries. This is just another reason why everyone, not just Americans, should oppose SOPA in its current form.
The worst part of all of this is the potential for my site, the one I’ve been blogging on for over 3 years, to come under fire. I link to a whole bunch of different places, and simply doing so could open me up to domain seizure, even if it wasn’t me who put the link there. I already have limited time to spend on here, and the additional task of playing copyright police would surely have an impact on how often I could post and comment. I don’t want to stop writing and I don’t want people to stop commenting, but SOPA has the very real potential to make both of those activities untenable.
So what can be done about SOPA and its potential chilling effects on our Internet ecosystem? For starters, if you’re an American citizen, write to your representative and tell them to oppose SOPA. If you’re not, then the best you can do is help raise awareness of the issue, as whilst it’s big news in tech circles, even some of the most versed political pundits were unaware of SOPA’s existence until recently. Past that, we just have to hope we’ve made enough of an impression on the USA’s congress critters that the bill doesn’t pass, at least in its current form. The hard work of many people has made this a very public issue, but only continued pressure will ensure it doesn’t damage the Internet and the industries it now supports.
EDIT: It appears that the strong opposition has caused the American Congress to shelve SOPA indefinitely. Count that as a win for sanity.
Maybe it’s the combination of mission secrecy and its close resemblance to the now-retired shuttle fleet, but the X-37B seems to get more press than any other spacecraft currently in orbit. When I first saw the diminutive shuttle cousin back in April last year I figured it was just a unique experiment that the Department of Defense was carrying out, and that the rumours about its satellite-capturing capabilities were greatly exaggerated. Indeed, towards the end of the mission I investigated the idea that it was already performing such a task, but based on the trajectories of other satellites it didn’t seem like that was the case. The X-37B blasted off once again at the start of this year, again shrouded in mystery as to what its actual mission was, and it’s been up there ever since.
That last fact is interesting, as the X-37B’s stated capabilities put it at a maximum of 270 days on orbit. The deadline for its return to Earth would then have been around November 30th, a date that is now well past. The United States Air Force has stated that its mission has been extended and it should be on orbit for a while to come. This is interesting because it tells us that the X-37B is a lot more capable than they’ve stated. Whilst this could just be good old-fashioned American over-engineering, it does lend some credence to the theories that there’s a whole bunch of capabilities hidden within the X-37B that aren’t officially there.
What’s been really interesting, however, are the discussions surrounding a potential manned variant of the X-37B. As it stands the X-37B is quite a small craft, measuring a mere 10 metres long with a payload bay that has only a few cubic metres of storage space. Overall it’s very similar in size to the Soyuz craft, so there’s definitely some potential for it to be converted. Rumour has it that the X-37B would be elongated significantly though, bringing its total length up to 14 metres with enough space to seat 7 astronauts. Granted it wouldn’t be as roomy as the shuttle was (nor could it deliver non-crew payloads at the same time) but it would be a quick path to restoring the USA’s manned flight capability. That would hinge on man-rating the Atlas V launch system, which is currently under investigation.
It’s not just space nuts that are getting all aflutter about the X-37B either. China has expressed concerns that the X-37B could be used as an orbital weapons delivery system. The secrecy surrounding the actual mission profiles the X-37B has been flying is probably what prompted these concerns, and it being under the sole purview of the Air Force doesn’t help matters. In all honesty I doubt the X-37B would be used that way, since it’s more of a generalist/reconnaissance craft than a weapons platform. If there’s someone to worry about launching weapons into orbit it would be the Russians, as they are the only (confirmed) nation to have launched armed craft. A dedicated weapons platform would also look nothing like the X-37B, especially if it was designed for on-orbit combat (who needs wings in space?).
The next couple of months will give us some more insight into the true purpose of the X-37B. It’s quite likely that these first couple of flights have just been complete shakedowns of all the systems that make up the X-37B, with the first flight being orbital manoeuvring verification and the current flight an endurance test. Should it stay up there for a significant amount of time, it’s more likely to be some form of advanced reconnaissance craft than something crazy like a satellite capturer or orbital weapons platform. The prospect of a manned variant is quite exciting and I’ll be waiting anxiously to see if the USA pursues that option.
There’s a saying amongst the space enthusiast community that the shuttle only continued on for so long in order to build the International Space Station, and the ISS only existed so that the shuttle had some place to go. Indeed, for the last 13 years of the shuttle program it almost exclusively visited the ISS, taking only a few missions elsewhere, usually to service the Hubble Space Telescope. With the shuttle now retired, many are looking towards the future of the ISS and the various manned space programs that contributed to its creation. It’s looking very likely that the ISS will face the same fate as Mir before it, but there are a multitude of possibilities for what could be done instead.
Originally the ISS was slated for decommissioning in 2016, and with it still not fully constructed (it is expected to be finished by next year) that would have given it a full useful life of only 4 years. The deadline was extended back in 2009 to 2020 in order to more closely match the designed functional lifetime of 7 years and hopefully recoup some of the massive investment that has gone into it. It was a good move, as many of the ISS components are designed to last well beyond that deadline (especially the Russian ones, which can be refurbished on orbit) and there’s still plenty of science that can be done using it as a platform.
The ISS, like Mir before it, has only one option for retirement: a fiery plunge through the atmosphere into a watery grave. Whilst there’s been lots of talk of boosting it up to a higher orbit, sending it to the Moon, or even using it as an interplanetary craft, all these ideas are simply infeasible. The ISS was designed and built to spend its entire life in low Earth orbit, with many assumptions made that preclude it from going any further. It lacks the shielding to go much higher than, say, the Hubble Space Telescope, and the structure is too weak to withstand the amount of thrust required to push it to a transit orbit (at least in any reasonable time frame). The modifications required to make such ideas feasible would be akin to rebuilding the entire station, and thus, to avoid cluttering up the already cluttered region of low Earth orbit, it must be sent back down to Earth.
Russia, however, has expressed interest in keeping at least some parts of the ISS in orbit past the 2020 deadline, apparently to use them as a base for its next-generation space station, OPSEK. This station would differ significantly from all previous space stations in that it would be focused on deep space exploration activities rather than direct science like its predecessors. It would seem those plans have hit some roadblocks, though, as the Russian Federal Space Agency has recently stated that the ISS will need to be de-orbited at the end of its life. Of course, there’s still a good 8 years to go before this will happen, and the space game could change completely between now and then, thanks in part to China and the private space industry.
China has tried to be part of the ISS project in the past but has usually faced strong opposition from the USA. So strong was the opposition that China has now started its own independent manned space program, with an eye to setting up its own permanent space station, Tiangong. China has already succeeded in putting several people into space and has even successfully conducted an extravehicular activity (EVA), showing that it has much of the technology needed to build and maintain a presence in space. Conveniently, much of that technology was imported from Russia, meaning Chinese craft are technically capable of docking with the Russian segments of the ISS. That’s good news for Russia as well, as their Soyuz craft could provide transport services to Tiangong in the future.
Private space companies are also changing the space ecosystem significantly, both in terms of transport costs and in providing services in space. SpaceX has just been approved to combine two of its demonstration missions to the ISS, which means that the next Dragon capsule will actually end up docking with the station. Should this prove successful, SpaceX would then begin flying routine cargo missions to the ISS and man-rating of their capsule would begin in earnest. Couple this with Bigelow Aerospace gearing up to launch its next inflatable space habitat in 2014~2015 and the re-purposing of the ISS by private industry becomes a possible (if slightly far-fetched) idea.
The next decade is definitely going to be one of the most fascinating for space technologies. The international power dynamic is shifting considerably, with superpowers giving way to private industry and new players wowing the world stage with their capabilities. We may not have a definitive future for the ISS, but its creation and continued use has provided much of the groundwork necessary to usher in the next era of space.
It’s nigh on impossible to make a system completely secure from outside threats, especially if it’s going to be available to the general public. Still, there are certain measures you can take that will make it a lot harder for a would-be attacker to get at your users’ private data, which is usually enough for them to give up and move on to a more vulnerable target. However, as my previous posts on matters of security have shown, many companies (especially start-ups) eschew security in favour of working on new features or improving the user experience. This might help in the short term to get users in the door, but you run the very real risk of being compromised by a malicious attacker.
The attacker might not even be entirely malicious, as appears to be the case with one of the newest hacker groups, calling themselves LulzSec. There’s a lot of speculation as to who they actually are, but their Twitter account alludes to them originally being part of Anonymous before leaving because they disagreed with the targets being chosen and were more in it for the lulz than anything else. Their targets range drastically, from banks to game companies and even the USA Senate, with the causes changing just as wildly: from simple fun to retaliation for wrongdoings by corporations and politicians. It would be easy to brand them as anarchists just out to cause trouble for the reaction, but some of their handiwork has exposed serious vulnerabilities in what should have been very secure web services.
One of their recent attacks compromised more than 200,000 Citibank accounts through the online banking system. The attack was nothing sophisticated (although authorities seem to be spinning it as such), with the attackers gaining access by simply changing the identifying URL and then automating the process of downloading all the information they could. In essence, Citibank’s system wasn’t verifying that the user accessing a particular URL was authorised to do so; it would be like logging onto Twitter, typing, say, Ashton Kutcher’s account name into the URL bar, and then being able to send tweets on his behalf. It’s basic authorisation at its most fundamental level, and LulzSec shouldn’t have been able to exploit such a rudimentary security hole.
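The hole being described here is what’s commonly called an insecure direct object reference: the application trusts an identifier in the URL without checking that the logged-in user actually owns it. A minimal sketch of the missing check, with a purely hypothetical account store standing in for whatever Citibank actually ran:

```python
# Sketch of the ownership check that was apparently missing.
# The account store and user names are hypothetical illustrations.
ACCOUNTS = {
    "12345": {"owner": "alice", "balance": 5000},
    "67890": {"owner": "bob", "balance": 250},
}

def get_account(account_id: str, logged_in_user: str) -> dict:
    account = ACCOUNTS.get(account_id)
    if account is None:
        raise KeyError("no such account")
    # The crucial step: verify the requester owns this account,
    # rather than trusting whatever ID appeared in the URL.
    if account["owner"] != logged_in_user:
        raise PermissionError("not authorised for this account")
    return account

# Alice can read her own account...
print(get_account("12345", "alice")["balance"])  # 5000
# ...but merely editing the ID in the URL should fail:
try:
    get_account("67890", "alice")
except PermissionError:
    print("blocked")
```

Without that one ownership comparison, enumerating IDs in a loop downloads every record, which is essentially what the attackers automated.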
There are many other examples of LulzSec hacking various organisations, with the latest round of targets all being games development companies. This has drawn the ire of many gamers, which just spurred LulzSec on to attack even more game and related media outlets just so they could watch the reaction. Whilst it’s kind of hard to take the line of “if you ignore them they’ll go away” when they’re unleashing a DDoS or downloading your users’ data, the attention that’s been lavished on them by the press and butthurt gamers alike is exactly what they’re after (and yes, I do get the irony of mentioning that :P). Still, had they not been catapulted to Internet stardom so quickly I can’t imagine they would have continued being as brash as they are now, although there is the possibility they might have started out doing even more malicious attacks in order to get attention.
Realistically though, the companies that are getting compromised by rudimentary URL and SQL injection attacks have only themselves to blame, since these are the most basic security issues, with well-known solutions, and shouldn’t pose a risk to them. Nintendo showed that it could withstand an attack without any disruption or loss of sensitive data, and LulzSec was quick to post the security hole and move on to more lulzy pastures. The DDoSing of others is a bit more troublesome to deal with; however, there are many services (some of them even free) designed to mitigate the impact of such an incident. So whilst LulzSec might be a right pain in the backside for many companies and consumers alike, their impact would be greatly softened by a strengthening of security at the most rudimentary level, and perhaps by giving them just a little less attention when they do manage to break through.
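SQL injection in particular has had a textbook fix for years: never splice user input into a query string, pass it as a parameter instead. A minimal demonstration using Python’s built-in sqlite3 module (the table and password are made up for illustration):

```python
import sqlite3

# Throwaway in-memory database with one illustrative user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

malicious = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so it matches every row regardless of the password supplied.
unsafe = conn.execute(
    "SELECT name FROM users WHERE password = '" + malicious + "'"
).fetchall()

# Safe: the driver treats the input purely as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE password = ?", (malicious,)
).fetchall()

print(unsafe)  # the injection matched a row it never should have
print(safe)    # [] -- no row has that literal password
```

Every mainstream database driver has supported placeholders like this for well over a decade, which is why falling to this class of attack is so inexcusable.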
I remember getting my first ever phone with a data plan. It was 3 years ago, and I remember looking through nearly every carrier’s offerings to see where I could get the best deal. I wasn’t going to get a contract, since I change my phone at least once a year (thank you, FBT exemption), and I was going to buy the handset outright, so many of the bundle deals going at the time weren’t available to me. I eventually settled on 3 Mobile, as they had the best of both worlds in terms of plan cost and data, totalling a mere $40/month for $150 worth of calls and 1GB of data. Still, when I was talking to them about how the usage was calculated I seemed to hit a nerve over certain use cases.
Now, I’m not a big user of mobile data despite my daily consumption of web services on my mobile devices, usually averaging about 200MB/month. Still, there have been times when I’ve really needed the extra capacity, like when I’m away and need an Internet connection for my laptop. Of course, tethering the two devices together doesn’t take much effort at all, my first phone only needed a driver for it to work, and as far as I could tell the requests would look like they were coming directly from my phone. However, the sales representatives told me in no uncertain terms that I’d have to get a separate data plan if I wanted to tether my handset or if I dared to plug my SIM card into a 3G modem.
Of course upon testing these restrictions I found them to be patently false.
Now, it could’ve just been misinformed salespeople who got mixed up when I told them what I was planning to do with my new data-enabled phone, but the idea that tethered Internet usage is somehow different to normal Internet usage wasn’t new to me. In the USA pretty much every carrier will charge you a premium on top of whatever plan you’ve got if you want to tether, usually providing a special application that enables the functionality. Of course, this has spurred people to develop applications that circumvent these restrictions on all the major smartphone platforms (iOS users will have to jailbreak, unfortunately) and the carriers aren’t able to tell the difference. But that hasn’t stopped them from taking action against those who would thwart their juicy revenue streams.
Most recently it seems that the carriers have been putting pressure on Google to remove tethering applications from the Android app store:
It seems a few American carriers have started working with Google to disable access to tethering apps in the Android Market in recent weeks, ostensibly because they make it easier for users to circumvent the official tethering capabilities offered on many recent smartphones — capabilities that carry a plan surcharge. Sure, it’s a shame that they’re doing it, but from Verizon’s perspective, it’s all about protecting revenue — business as usual. It’s Google’s role in this soap opera that’s a cause for greater concern.
Whilst this is another unfortunate sign that, no matter how hard Google tries to be “open”, it will still be at the mercy of the carriers, the banning of tethering apps sets a worrying precedent for carriers looking to control the Android platform. Sure, they already had a pretty good level of control, since they all release their own custom versions of Android for handsets on their networks, but now they’re also exerting pressure over the one part that was ostensibly never meant to be influenced by them. I can understand that they’re just trying to protect their bottom line, but the question has to be asked: is tethering really that big a deal for them?
It could be that my view is skewed by the Australian way of doing things, where data caps are the norm and the term “unlimited” is either a scam or comes at dial-up speeds. Still, from what I’ve seen of the USA market many wireless data plans come with caps anyway, so the bandwidth argument is out the window. Tethering a device requires no intervention from the carrier, and there are free applications available on nearly every platform that provide the required functionality. In essence, the carriers are charging you for a feature that should be free and are now strong-arming Google into protecting their bottom lines.
I’m thankful that this isn’t the norm here in Australia yet, but we have an unhealthy habit of imitating our friends in the USA, so you can see why this kind of behaviour concerns me. I’m also a firm believer in the idea that once I’ve bought the hardware it’s mine to do with as I please, and tethering falls squarely under that. Tethering is one of those things that really shouldn’t be an issue, and Google’s capitulating to the carriers just shows how difficult it is to operate in the mobile space, especially if you’re striving to make it as open as you possibly can.
It was almost 20 hours ago that I woke to the rude sound of my alarm, blaring out random garbles in a feeble attempt to rouse me from my slumber. Today was the day I’d set out for the USA, and my first plane was due to leave at 8am, just 2 hours away. What lay before me was a grand total of 20 hours of flight time and an entire day lost to the mere act of travelling. Still, my wife and I were excited for our first long trip overseas together, even though we’d be spending the first 10 days of it apart. With all that running through our heads we made our way to the airport thanks to our good friend Danne, who volunteered his services not only as a chauffeur but as our house sitter while we gallivanted around the USA.
The flight over was not as bad as I had expected. I’d been on a long haul flight before, 8 hours to Japan back in 2001, but this was going to be 13 hours and 33 minutes. The prospect was made even more uncomfortable by the fact that upon checking in we were told that there would be a seat between us, with no indication of whether it was filled or not. Luckily for us it wasn’t, and we enjoyed the extra space and convenience that it provided. I was able to get 6 hours or so of sleep but Rebecca, as always, struggled to get even a couple of minutes. She didn’t seem any worse for wear because of it though; I guess after dealing with insomnia for so many years you get used to running on nothing. The food and service were quite good for the ticket price we paid. I was wholly expecting to be nickel and dimed for each and every little thing, but the Delta Airlines flight felt almost identical to the Qantas flight we had taken hours earlier.
A long 13 hours later we were in LAX, the thriving hub of transportation that it is. After disembarking we were led to immigration, where they took not only our entire set of fingerprints but also our photo. I’d known for a long time that the USA had been doing this, and whilst I didn’t object to doing it, I still didn’t feel completely comfortable with this piece of security theatre. Still, it was painless at least, and once we were out of there our bags were waiting for us, ready to be picked up. After spending a confusing 30 minutes trying to figure out where each of us had to go (Rebecca was going on to Canada, myself to Orlando) we finally found the shuttle Rebecca had to take. Mere minutes later it arrived and she was whisked away to LAX Terminal 2, where she would catch her flight to Canada.
I stumbled around trying to find my way into the terminal that would take me to my final destination on this leg of my journey, getting hopelessly lost in the desolate landscape of LAX. I eventually found my way there through a long corridor that started evoking images of Orwell’s 1984, with a loudspeaker blaring warnings and my footsteps echoing in the lonely fluorescence. Then I was greeted with the friendly face of the TSA and my first ever American airport security check. They went over everyone’s ID with a UV light, took people’s bottles of water, made everyone take off their shoes and frisked about 1 in every 5 passengers. Suddenly the Australian security checks seemed mild in comparison. I got through with barely a second glance, but yet again I had that terrible feeling that my civil liberties were dying at the hands of the USA’s paranoia. This country didn’t make the greatest first impression.
I tried fruitlessly to find wifi and a working ATM, the lifeblood of my generation. None of the ATMs could do a cash withdrawal on my cards, not even the Bank of America one that’s apparently in cahoots with Westpac (which I was trying to use). All the wifi hotspots were either secured or paid portals, leaving me disconnected and alone. I did nothing for almost an hour before sitting down to write this, thinking there was no point if I couldn’t publish it right away. Still, writing is a great way to pass the time, and I still had over an hour before my next flight was scheduled to depart.
The flight to Orlando was painful, even though I lucked out with the emergency exit row. Neither of my temporary travel friends were interested in striking up a conversation, and the jet lag was setting in with a vengeance. Couple that with my bony ass being unable to find comfort in the seats and it was 5 hours in the air that couldn’t go fast enough. I eventually found solace in one of the books I had picked up (Pandora’s Star by Peter F. Hamilton) and managed to pass the majority of the time without too much fuss. Then came the dreaded moment: would my luggage be there to greet me when I landed?
Although I’ve never lost anything through the airports, I still have a healthy paranoia about them. If it’s anything but a direct flight I always think my luggage is going to get lost in the airport machine, doomed to bounce endlessly around the globe while I’m left stranded, devoid of my clothes and other miscellany. 10 minutes after landing, however, there was my bag, just as I had left it at LAX 6 hours earlier. Flush with the victory of picking up my luggage I made a break for my hotel for the night, the Hyatt Regency at the Orlando airport.
Unbeknownst to me, the large atrium I had walked through to get my bags was in fact the hotel itself. After grabbing my keys I went to my room, which as it turns out is quite opulent. After quickly changing into something more comfortable I went to the gym for a quick workout before making my way out for dinner. I decided to try the in-hotel restaurant, McCoy’s Bar and Grill. The food was so-so, but the Californian wine was quite good and the service was unlike anything I had ever experienced before. This was definitely capitalism taken to the extreme, where minimum wage workers earn their way up by providing you the ultimate in service. Having dinner out in Australia feels like getting spat in the face by comparison.
And now I’ve resigned myself to finishing off the $30 bottle of wine I have beside me and watching the Discovery Channel until I pass out. Hopefully my plan skirts around the horrible jet lag I felt earlier, but either way tomorrow I take on the challenge of trying to drive on the wrong side of the road in a Toyota Corolla, in preparation for one of the reasons I came here: to drive a Corvette around Florida for a week.
It really never fails to surprise me how much meddling the American congress does in NASA’s affairs, given the fact that its budget takes up a whopping 0.58% of total US government spending. The past 3 decades have seen many of NASA’s great ideas turned on their heads, either due to horrible design by committee or from being given directives by people who have absolutely zero aerospace knowledge. More recently though I grew to appreciate the new direction that Obama had laid out for NASA because, unlike Bush’s vision for space exploration, it was achievable and would lay the groundwork for future missions that would reach further into space than ever before. It seems however that NASA is still struggling to shrug off some of the pork barrel politics that have plagued it in the past and which are now threatening to ruin NASA’s future completely.
Specifically there’s a recent piece of news that tells us that the senate sub-committee in charge of NASA oversight is preparing a bill to derail Obama’s new vision for space:
Though the bill effectively cancels the delayed and over-budget Constellation moon-rocket program — as Obama requested in his NASA budget — it would repurpose that money to build a new heavy-lift rocket while largely ignoring the president’s call to fund new space-faring technology and commercial rockets that would send humans into space.
But his dramatic overhaul of the human-spaceflight program has faced fierce resistance on Capitol Hill, especially from lawmakers in states with other NASA centers or with big NASA contracts like Utah, where the solid-rocket motor that would have powered Constellation’s Ares rockets is manufactured.
The Senate bill, which if passed would lay out the direction of the space program for the next three years, would revive the fortunes of Utah’s solid-rocket maker, ATK, by requiring NASA to keep using its solid-rocket motors for a new heavy-lift rocket.
Alright, I can understand that it would be hard for any congress critter not to fight for the jobs of his constituents, but realistically the writing has been on the wall for some time for these folks. The retirement of the shuttles and the infrastructure they rely on was announced over 5 years ago, but of course, since the end date was well outside the election term at the time, there was little resistance to it then. Now that we’re halfway through the current term (with the scheduled end looking to occur just a year before the next election) dropping all those jobs that the shuttle program supports doesn’t look too good, and they’re fighting it by any means necessary.
Realistically though it’s just an exercise in pork barrel politics. If you take a look at the shuttle’s components you’ll notice that they’re not all made in the same area. That’s fair enough, sometimes you just don’t have the infrastructure. However the reason behind it was pure politics, as districts far beyond those surrounding the Kennedy Space Center wanted a piece of the shuttle pie. As a result the external tanks are made in New Orleans, the SRBs in Utah and the Space Shuttle Main Engines in California¹, with each component having to be shipped over to be assembled at the KSC. It spreads the pork around a fair bit, but the efficiency of the NASA program suffers as a result.
There are of course those who are taking this as a signal that congress supports an alternative vision that a group of NASA engineers have proposed, called DIRECT. Now I’ve always cast a skeptical eye over the DIRECT proposal as, whilst it does take advantage of a lot of current infrastructure and reduces the launch gap considerably (on paper), it has never really gained any official traction. Additionally it keeps NASA in the business of designing rockets for the rather rudimentary activities that are now being taken over by private space organisations. Thus whilst there might be significant cost savings in comparison to the Ares series of rockets, they still pale in comparison to commercial offerings. I still support the idea of NASA developing a new heavy lift launch system, solely because it has no current commercial application, but while DIRECT does give this as an option it fails to get away from the inefficiencies that plague the shuttle program (namely the giant standing army of people).
Hopefully this proposal doesn’t get any traction, as it would just ruin the solid plan that Obama had laid down for the future of humanity in space. It’s time for NASA to break the chains that have been holding it back for so long, handing over some of its capabilities to those who can do it cheaper, safer and faster. Only then can NASA hope to return to the days of being a pioneer in space, rather than languishing as the glorified taxi service to the ISS that many would have it be.
¹I can’t 100% guarantee the build location of the SSMEs, as Rocketdyne has several locations and I can’t seem to find an official source for where they’re made. As far as I can tell, however, they’re built somewhere other than New Orleans or Utah.
You know, whilst I appreciate that the Internet filter was the trigger for the creation of this blog and has been a healthy source of fodder for me to post on, I still wish it would just up and die already. It’s been said time and time again that the filter won’t achieve its goals and will only serve to make Australia more of an Internet backwater than it already is. When you’re planning to roll out a national broadband network at the same time, it seems rather counter-intuitive to go ahead and strangle it with an infrastructure bottleneck that all but negates the benefits of said network.
That being said I still stand by my position that the filter, at least in its current form, will not make its way into reality. The tech crowd is universally opposed to it and there’s increasing pressure from the giants of the Internet (Google, et al) to abandon such ideas. It seems now that even our good friends across the ocean are starting to have concerns that such a policy would be harmful not only to Australia and its citizens, but also to relations abroad:
Asked about the US view on the filter plan US State Department spokesman Noel Clay said: “The US and Australia are close partners on issues related to cyber matters generally, including national security and economic issues.
…In a speech in January US Secretary of State Hillary Clinton put internet freedom at the heart of American foreign policy as part of what she called “21st century statecraft”. The US, she said, would be seeking to resist efforts by governments around the world to curb the free flow of information on the internet and encouraged US media organisations to “take a proactive role in challenging foreign governments’ demands for censorship”.
Clay’s statement added: “The US Government’s position on internet freedom issues is well known, expressed most recently in Secretary Clinton’s January 21st address. We are committed to advancing the free flow of information, which we view as vital to economic prosperity and preserving open societies globally.”
Conroy’s first response was to say that he hadn’t heard anything, and he failed to make any comment on his opinion on the matter. I don’t blame him for doing that either, as up until recently he was only fighting the people of Australia and a few corporations. Now he’s got to deal with the US putting pressure on him to not go ahead with his proposal, and he can’t openly attack them like he has done with Google, leaving him with very few rhetorical options. I’m sure his spin doctors are working overtime on this one, and I don’t envy the job they have (I mean, really, how do you brush off an attack from the US government?).
More importantly there’s also the small issue of an agreement that Australia and the US signed about 6 years ago: the Australia – United States Free Trade Agreement. Back when it was first introduced there was hefty opposition to the proposal, mostly from Australia’s side, as it had the potential to wreak havoc on things like the Pharmaceutical Benefits Scheme (PBS) and forced Australia to make changes to its intellectual property laws. Despite all this the agreement passed, came into effect on the 1st of January 2005, and hasn’t really come up in political discussions since.
The FTA was much further-reaching than the issues that were brought up during negotiations. Other areas it covered were financial services, environmental issues, investment and government procurement. More interestingly, however, there are 2 key areas that the FTA covers that are quite likely to be affected by the proposed Internet filter, and they are:
The first, covering telecommunications, details agreed-upon terms by both countries to assure fair trade between the telecommunications industries in each country. The rules specifically exclude measures relating to broadcast or cable distribution of radio or television programming.
Among other provisions, the agreement lays out rules for settling disputes among the members of the telecommunications industries in one country with the members in the other. It entitles enterprises to:
- seek timely review by a regulator or court to resolve disputes;
- seek review of disputes regarding appropriate terms, conditions, and rates for interconnection; and
- obtain judicial review of a determination by a regulatory body.
The second covers electronic commerce: the parties agreed to co-operate on mechanisms to facilitate electronic commerce, not to impose customs duties on digital products, and for each to apply non-discriminatory treatment to the digital products of the other.
The first relates to how Australia and the US will provide communications infrastructure and services to each other in a fair and equitable way and provides a framework for settling disputes. Those dispute resolution provisions outline an area where the FTA could be invoked if Australia decides to implement a filter. Whilst the debate is still open on just how much an Internet filter would harm Australia’s ability to do business on the Internet, the greater tech community is of the mind that it will be detrimental, regardless of implementation. Whilst this doesn’t directly damage the FTA, it could be used as grounds for an injunction to stop such a filter from becoming reality, at least for a short while.
Probably the more important part of the FTA directly affected by the implementation of the filter is the Electronic Commerce section, which explicitly states that there be no discriminatory treatment of digital products. This can extend to information on subjects such as abortion, euthanasia or drug harm minimisation, which under the current filter proposal would be outright banned but are still perfectly legal within the US. There’s also the possibility, thanks to the lack of transparency of the filter and its blacklist, that an online retailer could end up blocked from people within Australia and be effectively barred from trading with us.
I’ll admit that the links to the FTA are a bit tenuous but there’s no doubt in my mind that businesses with an online presence in Australia will suffer under the proposed filter legislation. The FTA is just another bit of ammunition to argue against the filter and with the US now putting pressure on Conroy I’m sure that we’re not too far away from the FTA being mentioned at a higher level. Conroy really has his work cut out for him if he thinks he’ll be able to convince the US that the filter is a good idea.
Would the filter require the FTA to be amended? I doubt it, but then again I’m not particularly qualified to comment on that. If you know (or have a good opinion) let me know in the comments below.
Tip of the hat to David Cottrill for giving me the idea of mashing the FTA with the Internet filter.