Widespread vaccination programs have been the key to driving many crippling diseases to the brink of extinction. This boils down to one simple, irrefutable fact: they work and are incredibly safe. However, the anti-vaccination movement, which asserts all sorts of non-scientific drivel, has caused vaccination rates to drop to levels where herd immunity starts to become compromised. This presents a number of challenges, as unvaccinated children and adults are a threat not only to themselves but to others who have contact with them. Indeed the problem may be worse than first thought, as it appears that even among those who do vaccinate the completion rate is low, with 1 in 3 two year olds in the USA not having completed the recommended vaccination course.
The study, published by RTI International (a non-profit research institute based in North Carolina), showed that up until a child was 2 years old the state of their vaccinations was quite fluid. Indeed the vast majority of children weren't compliant with the required vaccination schedule, with most of them receiving a dose outside the recommended window. Upon reaching approximately 24 months of age, however, most had caught up with the required schedule, although a staggering 33% of them were still non-compliant at this age. This might not seem like much of an issue since the majority do eventually get their vaccinations, but there are sound scientific reasons for the scheduling of vaccines. Ignoring them has the potential to limit, or completely negate, their efficacy.
The standard vaccine schedule has been developed to maximise the efficacy of vaccines and to reduce the risk that, should a child contract a disease, they suffer potentially life threatening complications. The pertussis (whooping cough) vaccine is estimated to have an extremely high efficacy rate in young children, up to 95%, but that begins to drop off rapidly if the vaccine is administered later in life. Similar efficacy slopes are seen with other childhood disease vaccines, such as the combined MMR vaccine. At the same time these vaccines are administered around the time when the potential impacts of the disease are at their greatest, so missing a vaccine at that point runs the risk of severe complications should the disease be contracted.
It’s unsurprising that the study found that the western states had the lowest rates of vaccination, as that’s where the anti-vaccination movement has been most active. Just this year there was an outbreak of measles there, and the year before that there was a whooping cough epidemic. Interestingly the southern states had the highest rates of vaccination, as shown by the snippet of the infographic above. Whilst the anti-vaccination movement is undeniably an influence in the hodge-podge vaccination approach that seems prevalent, the blame here lies largely with the parents who aren’t adhering to the vaccination schedule.
It’s understandable that some of these things can slip, as the challenges of being a parent are unending, but when it comes to their children’s health there’s really no competing priority. For parents this means paying better attention to their doctor’s advice and ensuring that the vaccine schedule is adhered to more closely. Additionally the government could readily help alleviate this issue by developing better reminder systems, ones that are more in tune with modern parents’ lives. Hopefully these statistics alone will be enough to jar most into action.
If you’ve worked in IT with a government organisation you’ll know the term “data sovereignty”. For those who haven’t had the pleasure, the term refers to the laws that apply to data in the location where it’s stored. When dealing with government entities this means that service providers have to guarantee that the data won’t leave Australian shores, because if it did the data would no longer be subject to Australian law and whatever government got hold of it would be outside Australia’s jurisdiction. This has been the major limiting factor in the Australian Government’s adoption of cloud services as, until just recently, the major providers didn’t have an Australian presence. However even that might not suffice soon, as the US government is attempting to break the idea of data sovereignty by requiring companies to disclose data that’s not within its jurisdiction.
This issue has arisen out of a long running court case that the US government has had against Microsoft. Essentially, authorities in the USA want access to information that is stored on Microsoft servers in Dublin, Ireland. Their argument is that since Microsoft is in control of the servers they’re on the hook to provide the data. Microsoft’s argument has been that the US government should make that request through authorities within that jurisdiction. Indeed, senior legal counsel from the Irish Supreme Court has said that such a request could be made under the Mutual Legal Assistance Treaty. This hasn’t satisfied the US authorities, who believe that since the company is based in the USA all the data it controls should be made available to them under their legal jurisdiction.
Putting aside the privacy concerns for the moment (and believe me there are many), if the US courts compel Microsoft to provide data from outside their jurisdiction then the notion of data sovereignty on any cloud service becomes null and void. No longer will anyone be able to assume that their data is subject to the laws of the country it resides in, which raises a whole host of legal issues. Do companies that make use of locally provided but not locally owned services need to comply with US data retention laws like SOX? Will these requests for data be held to the same evidentiary standards that other countries require? What’s stopping the US government from compelling US based companies to hand over other governments’ data on these services? I could go on, but it all comes down to the US government completely overstepping its jurisdiction.
For someone like me, who works primarily in the large government IT space, the attack feels even more personal. I’ve been a champion of cloud services for years and it’s only been recently that I’ve been able to make use of the public cloud with my clients. Should the US government continue with (and win) this case the ramifications will be instantaneous: all the government services running on cloud services will be in-housed as soon as possible. That’s not to mention the potential effects it could have on how international companies like mine will interact with government. Suddenly we wouldn’t be able to work with any client related data except when we’re on site, a tremendous blow to the way we do business.
The US government needs to realise just how damaging something like this could be both to their reputation internationally and the business that US based companies do elsewhere. Data sovereignty laws exist for a reason and breaking them just because your law enforcement agency doesn’t want to go through the proper channels isn’t a good enough excuse. If they continue down this path the IT industry will suffer immensely as a result and for nothing more than some saved paperwork and inflated egos.
Grow up, USA. Seriously.
I’m no conspiracy theorist, my feet are way too firmly planted in the world of testable observations to fall for that level of crazy, but I do love it when we the public get to see the inner workings of secretive programs, government or otherwise. Part of it is sheer voyeurism, but if I’m truthful the things that really get me are the big technical projects, things that, stripped of the veil of secrecy, would be wondrous in their own right. The fact that they’re hidden from public view just adds to the intrigue, making you wonder why such things needed to be kept secret in the first place.
One of the first things that comes to mind is the HEXAGON series of spy satellites, high resolution observation platforms launched during the Cold War that still rival the resolution of satellites launched today. It’s no secret that all space-faring nations have fleets of satellites up there for such purposes, but the fact that the USA was able to keep the exact nature of the entire program secret for so long is quite astounding. The technology behind it was what really intrigued me, as it really was years ahead of the curve in terms of capabilities, even if it didn’t have the longevity of its fully digital progeny.
Yesterday however a friend sent me this document from the Electronic Frontier Foundation which provides details on something called the Presidential Surveillance Program (PSP). I was instantly intrigued.
According to William Binney, a former technical director at the National Security Agency, the PSP is in essence a massive data gathering program with possible intercepts at all major fibre terminations within the USA. The system simply siphons off all incoming and outgoing data, which is then stored in massive, disparate data repositories. This in itself is a mind boggling endeavour, as the amount of data that transits the Internet in a single day dwarfs the capacity of most large data centres. The NSA then ramps it up a notch by being able to recover files, emails and all sorts of other data based on keywords and pattern matching, which implies heuristics on a level that’s simply mind blowing. Of course this is all I’ve got to go on at the moment, but the idea itself is quite intriguing.
For starters, creating a network that’s able to handle a direct tap on a fibre connection is no small feat in itself. When the fibres terminating at the USA’s borders are capable of speeds in the GB/s range, the required infrastructure to handle that is non-trivial, especially if you want to store that data later. Storing that amount of data is another matter entirely, as most commercial arrays begin to tap out in the petabyte range. Binney’s claims start to seem a little far fetched here as he states there are plans up into the yottabyte range, but he concedes that current incarnations of the program couldn’t hold more than tens of exabytes. Barring some major shake up in the way we store data I can’t fathom how they’d manage to create an array that big. Then again I don’t work for the NSA.
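To put those storage figures in perspective, here’s a rough back-of-envelope calculation. The daily intake figure is my own illustrative assumption, not something from the EFF document, but it shows just how far apart exabytes and yottabytes really are:

```python
# Rough scale comparison for the storage units mentioned above.
# The 2 PB/day intake rate is an illustrative assumption.
PB = 10**15  # petabyte: roughly where commercial arrays tap out
EB = 10**18  # exabyte: the scale Binney concedes is plausible today
YB = 10**24  # yottabyte: the scale he claims is planned

daily_intake_bytes = 2 * PB  # assumed intake from the fibre taps

days_to_fill_10_eb = (10 * EB) / daily_intake_bytes
years_to_fill_1_yb = (YB / daily_intake_bytes) / 365

print(f"Days to fill 10 EB at 2 PB/day: {days_to_fill_10_eb:,.0f}")
print(f"Years to fill 1 YB at 2 PB/day: {years_to_fill_1_yb:,.0f}")
```

Even at that generous intake rate, a yottabyte is over a million years of collection away, which is why the claim strains credulity.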
As intriguing as such a system might be, there’s no question that its existence is a major violation of privacy for US citizens and the wider world. Such a system is akin to tapping every single phone and recording every conversation on it, which is most definitely not supported by their current legal system. Just because they don’t use the data until they have a reason to doesn’t make it just, either, as all data gathered without suspicion of guilt or intent to commit a crime is illegitimate. I could think of many legitimate uses for the data (anonymised analytics could prove very useful) but the means by which it was gathered eliminates any purpose being legitimate.
In my travels through the USA I became intimately acquainted with their high level of airport security. Upon entering the country we were fingerprinted, photographed and grilled about what our trip was about. There were also the long lines for getting through the metal detectors and full body scanners, usually taking up a good 45 minutes of my time. I was never chosen to go through the backscatter x-ray machines (nor did I see any of the newer millimetre wave ones) but I did see many people go through them. Most of them weren’t exactly what you’d call a security risk (mostly people in wheelchairs) but I knew exactly why those machines were there: to make everyone feel safer without actually making them so.
This is what is referred to as security theatre. These scanners are supposedly better at detecting things that slip by metal detectors, which they accomplish by using low-energy x-rays that penetrate clothing. Solid objects then become obvious, and should something suspicious be identified the passenger can be taken aside for further searching. Trouble is, the machines aren’t terribly effective at what they’re designed to do, and the backscatter x-ray machines emit ionizing radiation (not a lot, mind you, but there’s been minimal research done into them). Using them seems like a pointless exercise, and indeed even though they’ve been in operation in the USA for quite some time the jury is still out on whether they’re actually effective.
So you can then imagine my surprise when I find out that we’ll be getting these scanners at all international airports in Australia:
PASSENGERS at airports across Australia will be forced to undergo full-body scans or be banned from flying under new laws to be introduced into Federal Parliament this week.
In a radical $28 million security overhaul, the scanners will be installed at all international airports from July and follows trials at Sydney and Melbourne in August and September last year.
The Government is touting the technology as the most advanced available, with the equipment able to detect metallic and non-metallic items beneath clothing.
Now, we won’t be getting the dubious backscatter style ones here; instead we’ll have the newer millimetre wave ones that don’t emit ionizing radiation. That’s the only good news though, as they’ve also amended the legislation that allowed you to turn down things like this in favour of a pat down, with the penalty for refusing to go through one being that you’ll be barred from your flight. To top it all off, the transport minister Anthony Albanese sealed it with this choice quote: “I think the public understands that we live in a world where there are threats to our security and experience shows they want the peace of mind that comes with knowing government is doing all it can”.
It’s almost like he knows these things are a useless piece of security theatre, but is going ahead with them anyway.
More than a decade has passed since the events of September 11, 2001 and we’ve yet to see a repeat, or an attempted repeat, of the events that led up to that tragedy here or overseas. Health and privacy concerns aside, the reality is that these scanners don’t really accomplish what they’re designed to do and are thus just another inconvenience and waste of taxpayer dollars. I can understand that some will feel safer seeing them there, but that doesn’t change the fact that they’re just another piece of security theatre, and a costly one at that.
For over 100 years rights holders have resisted any changes to their business models brought about by changes in technology. From a business perspective it’s hard to blame them, I mean who wouldn’t do everything in their power to keep making money, but history has shown that no matter how hard they fight they will eventually lose out. Realistically the world has moved on, and instead of attempting to keep the status quo rights holders should be looking for ways to exploit these new technologies to their advantage, not ignore them or try to legislate them away. Indeed, if other industries followed suit you’d have laws preventing you from developing automated transport in order to save the buggy whip industry.
The copyright system that the USA employs is a great example of where legislation can go too far at the request of an industry failing to embrace change. At its inception, copyright was much like a patent: a time limited exclusivity deal that enabled a creator to profit from their endeavours for a set period, after which the work would enter the public domain. This meant that as time went on there would be an ever growing collection of public knowledge that would benefit everyone, not just the rights holder. However, unlike the patent system, copyright in the USA has seen repeated term extensions, enough that works which would have come into the public domain will probably never do so.
Thankfully, whilst the copyright system might be the product of an arms race between innovators and rights holders, that hasn’t stopped innovation in the areas where the two meet. Most of this can be traced back to provisions in the Digital Millennium Copyright Act (DMCA) that granted safe harbour to any site that relies on user generated content. In essence it put the burden of work on the rights holders themselves, requiring them to notify a site about infringing works. The site is then fully protected from legal action should it comply with the request, even if it restores the offending material after receiving a counter-notice from the alleged infringer. Many sites rely on this safe harbour to keep running, because the reverse, policing copyright themselves, is both technically challenging and resource intensive.
However, just like all the technologies and provisions made for the rights holder industry previously, those safe harbour provisions, which enabled many of the world’s top websites to flourish, are now seen as a threat to their business models. Rights holder associations have said that the DMCA as it stands is too lenient and have lobbied for changes that would better support their business. This has come in the form of two recent bills, one in each house: the PROTECT IP Act (PIPA) in the Senate and the Stop Online Piracy Act (SOPA) in the House of Representatives. Both of these bills have attracted heavy criticism from the technology and investment sectors, and it’s easy to see why.
At their core the bills are essentially the same. Both look to strengthen the powers rights holders have in pursuing copyright infringers whilst weakening the safe harbour provisions created under the DMCA. Additionally, many of the mechanisms described in the bills are at odds with the way the Internet is designed to work, breaking many of the ideals that were set out to ensure ubiquitous access. There are also many civil liberty issues at stake here, and whilst the bills’ supporters have assured everyone that they don’t impact civil liberties in any way, the wording is vague enough to support both interpretations.
The main issue I and many others take with these bills is the shifting of the burden of proof (and thus responsibility) away from the rights holders and onto the web site owners. The changes SOPA advocates mean that web site administrators will be responsible for identifying copyrighted material and then removing it from their website, lest they fall prey to having their domain seized. Whilst this more than likely won’t be the downfall of the sites that made their fame inside the safe harbours of the DMCA it would have a chilling effect on start-ups looking to innovate in an area that would have anything to do with a rights holder group. Indeed it would be the sites that have limited resources that would be hit the hardest as patrolling for copyright infringement isn’t a fully automated process yet and the burden could be enough to drive them under.
It’s also evident that SOPA was put together rather haphazardly when some of the best known copyright infringement sites, like The Pirate Bay, are actually immune to it. Indeed many sites that rights holders complain about aren’t covered by SOPA (just by the current laws which, from what I can tell, means they’re not going anywhere) and thus the bill will have little impact on their activities.
You might be wondering why I, an Australian who’s only ever been to the USA once, would care about something like SOPA. Disregarding for the moment the argument from principle and the fact that I don’t want to see the US technology sector die (I could justify my position easily with either), the unfortunate reality is that Australia has a rather liberal free trade agreement with the USA. What this means is that not only do we trade with them free of tariffs and duties, we’re also obliged to comply with their laws that affect trade. SOPA is one such bill, and should it pass it’s highly likely that we’d be compelled to either implement a similar law ourselves or simply enforce theirs. Don’t think that would happen? A leaked letter from the American ambassador to Spain warned them that not passing a SOPA-like bill would see them put on a trade blacklist, effectively ending trade between the two countries. This is just another reason why everyone, not just Americans, should oppose SOPA in its current form.
The worst part of all of this is the potential for my site, the one I’ve been blogging on for over 3 years, to come under fire. I link to a whole bunch of different places and simply doing so could open me up to domain seizure, even if it wasn’t me putting the link there. I already have limited time to spend on here and the additional task of playing copyright police would surely have an impact on how often I could post and comment. I don’t want to stop writing and I don’t want people to stop commenting but SOPA has the very real potential to make both those activities untenable.
So what can be done about SOPA and its potential chilling effects on our Internet ecosystem? For starters, if you’re an American citizen, write to your representative and tell them to oppose SOPA. If you’re not, the best you can do is help raise awareness of this issue, as whilst it’s a big issue in tech circles, even some of the most versed political pundits were unaware of SOPA’s existence until recently. Past that we just have to hope we’ve made enough of an impression on the US Congress that the bill doesn’t pass, at least in its current form. The hard work of many people has made this a very public issue, but only continued pressure will ensure it doesn’t damage the Internet and the industries it now supports.
EDIT: It appears that the strong opposition has caused the American congress to shelve SOPA indefinitely. Count that as a win for sanity.
Maybe it’s the combination of mission secrecy and close resemblance to the now retired shuttle fleet, but the X-37B seems to get more press than any other spacecraft currently in orbit. When I first saw the diminutive shuttle cousin back in April last year I figured it was just a unique experiment the Department of Defense was carrying out, and that the rumours about its satellite capturing capabilities were greatly exaggerated. Indeed, towards the end of the mission I investigated the idea that it was already performing such a task, but based on the trajectories of other satellites at the time it didn’t seem to be the case. The X-37B blasted off once again at the start of this year, again shrouded in mystery as to its actual mission, and it’s been up there ever since.
That last fact is interesting, as the X-37B’s stated capabilities put it at a maximum of 270 days on orbit. The deadline for its return to earth would then have been around November 30th, a date that has now well passed. The United States Air Force has stated that its mission has been extended and it should be on orbit for a while to come. This is interesting because it tells us the X-37B is a lot more capable than they’ve stated it is. Whilst this could just be good old fashioned American over-engineering, it does lend some credence to the theories that there’s a whole bunch of capabilities hidden within the X-37B that aren’t officially there.
What’s been really interesting, however, are the discussions surrounding a potential manned variant of the X-37B. As it stands the X-37B is quite a small craft, measuring a mere 10 metres long with a payload bay that has only a few cubic metres of storage space. Overall it’s very similar in size to the Soyuz craft, so there’s definitely some potential for it to be converted. Rumour has it that the X-37B would be elongated significantly though, bringing its total length up to 14 metres with enough space to seat 7 astronauts. Granted it wouldn’t be as roomy as the shuttle was (nor could it deliver non-crew payloads at the same time) but it would be a quick path to restoring the USA’s manned flight capability. That would hinge on the man-rating of the Atlas V launch system, which is currently under investigation.
It’s not just space nuts getting all aflutter about the X-37B either. China has expressed concerns that the X-37B could be used as an orbital weapons delivery system. The secrecy surrounding the actual mission profiles the X-37B has been flying is probably what prompted these concerns, and it being under the sole purview of the Air Force doesn’t help matters. In all honesty I doubt the X-37B would be used that way, since it’s more of a generalist/reconnaissance craft than a weapons platform. If there’s someone to worry about launching weapons into orbit it would be the Russians, as they are the only (confirmed) nation to have launched armed craft. A dedicated weapons platform would also look nothing like the X-37B, especially if it was designed for on-orbit combat (who needs wings in space?).
The next couple of months will give us more insight into the true purpose of the X-37B. It’s quite likely that these first couple of flights have just been shakedowns of all the systems that make up the X-37B, with the first flight verifying orbital manoeuvring and the current flight being an endurance test. Should it stay up there for a significant amount of time, it’s more likely to be some form of advanced reconnaissance craft than something crazy like a satellite capturer or orbital weapons platform. The prospect of a manned variant is quite exciting and I’ll be waiting anxiously to see if the USA pursues that as an option.
There’s a saying amongst the space enthusiast community that the shuttle only continued for so long in order to build the International Space Station, and the ISS only existed so the shuttle had some place to go. Indeed, for the last 13 years of the shuttle program it pretty much exclusively visited the ISS, taking only a few missions elsewhere, usually to service the Hubble Space Telescope. With the shuttle now retired, many are now looking towards the future of the ISS and the various manned space programs that contributed to its creation. It’s looking very likely that the ISS will face the same fate as Mir did before it, but there are a multitude of possibilities for what could be done instead.
Originally the ISS was slated for decommissioning in 2016, and with it still not fully constructed (it is expected to be finished by next year) that would have given it a full useful life of only 4 years. The deadline was extended back in 2009 to 2020 in order to more closely match the designed functional lifetime of 7 years and hopefully recoup some of the massive investment that has gone into it. It was a good move, as many of the ISS components are designed to last well beyond that deadline (especially the Russian ones, which can be refurbished on orbit) and there’s still plenty of science that can be done using it as a platform.
The ISS, like Mir before it, has only one option for retirement: a fiery plunge through the atmosphere into a watery grave. Whilst there’s been lots of talk of boosting it to a higher orbit, sending it to the moon or even using it as an interplanetary craft, all these ideas are simply infeasible. The ISS was designed and built to spend its entire life in low earth orbit, with many assumptions made that preclude it from going any further. It lacks the proper shielding to go much higher than, say, the Hubble Space Telescope, and the structure is too weak to withstand the amount of thrust required to get it to a transit orbit (at least in any reasonable time frame). The modifications required to make such ideas feasible would be akin to rebuilding the entire station, and thus, to avoid cluttering up the already cluttered region of low earth orbit, it must be sent back down to earth.
Russia, however, has expressed interest in keeping at least some parts of the ISS in orbit past the 2020 deadline, apparently to use them as a base for their next generation space station, OPSEK. This space station would differ significantly from all previous space stations in that it would be focused on deep space exploration activities rather than direct science like its predecessors. It would seem those plans have hit some roadblocks, as the Russian Federal Space Agency has recently stated that the ISS will need to be de-orbited at the end of its life. Of course there’s still a good 8 years to go before this happens, and the space game could change completely between now and then, thanks in part to China and the private space industry.
China has tried to be part of the ISS project in the past but has usually faced strong opposition from the USA. So strong was the opposition that they have now started their own independent manned space program, with an eye to setting up their own permanent space station called Tiangong. China has already succeeded in putting several people into space and even successfully conducted an extravehicular activity (EVA), showing that they have much of the technology needed to build and maintain a presence in space. Coincidentally much of their technology was imported from Russia, meaning their craft are technically capable of docking with the Russian segments of the ISS. That’s good news for Russia as well, as their Soyuz craft could provide transport services to Tiangong in the future.
Private space companies are also changing the space ecosystem significantly, both in terms of transport costs and in providing services in space. SpaceX has just been approved to roll two of its demonstration missions into one, which means the next Dragon capsule will actually end up docking with the ISS. Should this prove successful, SpaceX would begin flying routine cargo missions to the ISS and the man-rating of their capsule would begin in earnest. Couple this with Bigelow Aerospace gearing up to launch their next inflatable space habitat in 2014~2015 and the re-purposing of the ISS by private industry becomes a possible (if slightly far fetched) idea.
The next decade is definitely going to be one of the most fascinating for space technologies. The international power dynamic is shifting considerably, with superpowers giving way to private industry and new players wowing the world stage with their capabilities. We may not have a definitive future for the ISS, but its creation and continued use has provided much of the groundwork necessary to usher in the next era of space.
It’s nigh on impossible to make a system completely secure from outside threats, especially if it’s going to be available to the general public. Still, there are certain measures you can take that will make it a lot harder for a would-be attacker to get at your users’ private data, which is usually enough for them to give up and move on to another, more vulnerable target. However, as my previous posts on security have shown, many companies (especially start ups) eschew security in favour of working on new features or improving user experience. This might help in the short term to get users in the door, but you run the very real risk of being compromised by a malicious attacker.
The attacker might not even be entirely malicious, as appears to be the case with one of the newest hacker groups, calling themselves LulzSec. There’s a lot of speculation as to who they actually are, but their Twitter alludes to the fact that they were originally part of Anonymous but decided to leave since they disagreed with the targets being chosen and were more in it for the lulz than anything else. Their targets range drastically, from banks to game companies and even the US Senate, with the causes changing just as wildly, from simple fun to retaliation for wrongdoings by corporations and politicians. It would be easy to brand them as anarchists just out to cause trouble for the reaction, but some of their handiwork has exposed serious vulnerabilities in what should have been very secure web services.
One of their recent attacks compromised more than 200,000 Citibank accounts via the online banking system. The attack was nothing sophisticated (although authorities seem to be spinning it as such): the attackers gained access by simply changing the account identifier in the URL and then automating the process of downloading all the information they could. In essence, Citibank’s system wasn’t verifying that the user accessing a particular URL was authorized to do so. It would be like logging onto Twitter, typing, say, Ashton Kutcher’s account name into the URL bar, and then being able to send tweets on his behalf. This is authorization at its most fundamental level, and LulzSec shouldn’t have been able to exploit such a rudimentary security hole.
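To illustrate the class of bug being described here (often called an insecure direct object reference), a minimal sketch in Python, with hypothetical account data and function names of my own invention, not Citibank’s actual system:

```python
# Hypothetical, simplified example of the missing check described above.
# Account IDs come from the URL; the question is whether the server
# verifies that the logged-in user actually owns the requested account.
ACCOUNTS = {
    "1001": {"owner": "alice", "balance": 5200},
    "1002": {"owner": "bob", "balance": 310},
}

def get_account_insecure(session_user, account_id):
    # Roughly what the compromised system did: serve whatever ID appears
    # in the URL, with no ownership check at all.
    return ACCOUNTS.get(account_id)

def get_account_secure(session_user, account_id):
    # The fix: authorize every request against the session, not the URL.
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != session_user:
        return None  # pretend it doesn't exist rather than leak data
    return account
```

With the insecure version, “alice” can read “bob’s” account just by guessing his ID; with the secure version she gets nothing back. Automating that ID-guessing loop is all the attack described above amounts to.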
There are many other examples of LulzSec hacking various other organisations, with the latest round of them all being games development companies. This has drawn the ire of many gamers, which just spurred them on to attack even more games and related media outlets just so they could watch the reaction. Whilst it’s hard to take the line of “if you ignore them they’ll go away” when they’re unleashing a DDoS or downloading your users’ data, the attention that’s been lavished on them by the press and butthurt gamers alike is exactly what they’re after (and yes, I do get the irony of mentioning that :P). Still, had they not been catapulted to Internet stardom so quickly I can’t imagine that they would continue being as brash as they are now, although there is the possibility they might have started out doing even more malicious attacks in order to get attention.
Realistically though, the companies that are getting compromised by rudimentary URL manipulation and SQL injection attacks have only themselves to blame, since these are the most basic security issues and have well-known solutions that shouldn’t pose any difficulty to implement. Nintendo showed that they could withstand an attack without any disruption or loss of sensitive data, and LulzSec was quick to post the security hole and then move on to more lulzy pastures. The DDoSing of others is a bit more troublesome to deal with; however, there are many services (some of them even free) that are designed to mitigate the impact of such an incident. So whilst LulzSec might be a right pain in the backside for many companies and consumers alike, their impact would be greatly softened by a strengthening of security at the most rudimentary level, and perhaps by giving them just a little less attention when they do manage to break through.
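For the SQL injection case, the well-known solution is parameterized queries. A minimal sketch using Python’s sqlite3 module, with a made-up users table for illustration:

```python
import sqlite3

# Toy in-memory database standing in for a real user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Building the query by string concatenation lets attacker-controlled
    # input rewrite the SQL itself.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: the driver treats the input strictly as data,
    # never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None
```

The classic payload `' OR '1'='1` logs in as anyone through the vulnerable version, because the WHERE clause collapses to something always true; the parameterized version simply treats it as a (wrong) password and rejects it.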
I remember getting my first ever phone with a data plan. That was 3 years ago, and I looked through nearly every carrier’s offerings to see where I could get the best deal. I wasn’t going to get a contract since I change my phone at least once a year (thank you FBT exemption) and I was going to buy the handset outright, so many of the bundle deals going at the time weren’t available to me. I eventually settled on 3 mobile as they had the best of both worlds in terms of plan cost and data, totaling a mere $40/month for $150 worth of calls and 1GB of data. Still, when I was talking to them about how the usage was calculated I seemed to hit a nerve over certain use cases.
Now I’m not a big user of mobile data despite my daily consumption of web services on my mobile devices, usually averaging about 200MB/month. Still, there have been times when I’ve really needed the extra capacity, like when I’m away and need an Internet connection for my laptop. Of course, tethering the two devices together doesn’t take much effort at all (my first phone only needed a driver for it to work) and, as far as I could tell, the requests would look like they were coming directly from my phone. However, the sales representatives told me in no uncertain terms that I’d have to get a separate data plan if I wanted to tether my handset, or if I dared to plug my SIM card into a 3G modem.
Of course upon testing these restrictions I found them to be patently false.
Now, it could’ve just been misinformed salespeople who got mixed up when I told them what I was planning to do with my new data-enabled phone, but the idea that tethered Internet usage is somehow different to normal Internet usage wasn’t new to me. In the USA pretty much every carrier will charge you a premium on top of whatever plan you’ve got if you want to tether it to another device, usually providing a special application that enables the functionality. Of course, this has spurred people to develop applications that circumvent these restrictions on all the major smartphone platforms (iOS users will have to jailbreak, unfortunately) and the carriers aren’t able to tell the difference. But that hasn’t stopped them from taking action against those who would thwart their juicy revenue streams.
Most recently it seems that the carriers have been putting pressure on Google to remove tethering applications from the Android app store:
It seems a few American carriers have started working with Google to disable access to tethering apps in the Android Market in recent weeks, ostensibly because they make it easier for users to circumvent the official tethering capabilities offered on many recent smartphones — capabilities that carry a plan surcharge. Sure, it’s a shame that they’re doing it, but from Verizon’s perspective, it’s all about protecting revenue — business as usual. It’s Google’s role in this soap opera that’s a cause for greater concern.
Whilst this is another unfortunate sign that, no matter how hard Google tries to be “open”, it will still be at the mercy of the carriers, the banning of tethering apps sets a worrying precedent for carriers looking to control the Android platform. Sure, they already had a pretty good level of control over it, since they all release their own custom versions of Android for handsets on their network, but now they’re also exerting pressure over the one part that was ostensibly never meant to be influenced by them. I can understand that they’re just trying to protect their bottom line, but the question has to be asked: is tethering really that big of a deal for them?
It could be that my view is skewed by the Australian way of doing things, where data caps are the norm and the term “unlimited” is either a scam or at dial-up level speeds. Still, from what I’ve seen of the USA market, many wireless data plans come with caps anyway, so the bandwidth argument is out the window. Tethering to a device requires no intervention from the carrier, and there are free applications available on nearly every platform that provide the required functionality. In essence, the carriers are charging you for a feature that should be free and are now strong-arming Google into protecting their bottom lines.
I’m thankful that this isn’t the norm here in Australia yet, but we have an unhealthy habit of imitating our friends in the USA, so you can see why this kind of behavior concerns me. I’m also a firm believer in the idea that once I’ve bought the hardware it’s mine to do with as I please, and tethering falls squarely under that realm. Tethering is one of those things that really shouldn’t be an issue, and Google capitulating to the carriers just shows how difficult it is to operate in the mobile space, especially if you’re striving to make it as open as you possibly can.
It was almost 20 hours ago that I woke up to the rude sound of my alarm, blaring out random garbles in a feeble attempt to wake me from my slumber. Today was the day I’d set out for the USA, and my first plane was due to leave at 8am, just 2 hours away. What lay before me was a grand total of 20 hours of flight time and an entire day lost to the mere act of travelling. Still, my wife and I were excited for our first long trip overseas together, even though we’d be spending the first 10 days of it apart. With all that running through our heads we made our way to the airport thanks to our good friend Danne, who volunteered his services not only as a chauffeur but as our house sitter while we gallivanted overseas.
The flight over was not as bad as I had expected. I’d been on a long-haul flight before (8 hours to Japan back in 2001) but this one was going to be 13 hours and 33 minutes. The prospect was made even more uncomfortable by the fact that, upon checking in, we were told there would be a seat between us, with no indication of whether it was filled or not. Luckily for us it wasn’t, and we enjoyed the extra space and convenience it provided. I was able to get 6 hours or so of sleep but Rebecca, as always, struggled to get even a couple of minutes. She didn’t seem any worse for wear because of it though; I guess after dealing with insomnia for so many years you get used to running on nothing. The food and service were quite good for the ticket price we paid. I was wholly expecting to get nickel-and-dimed for each and every little thing, but Delta Airlines felt almost identical to the Qantas flight we had taken hours earlier.
A long 13 hours later we were in LAX, the thriving transportation hub that it is. After disembarking we were led to immigration, where they took not only a full set of our fingerprints but also our photo. I’d known for a long time that the USA had been doing this, and whilst I didn’t object to doing it, I still didn’t feel completely comfortable with this piece of security theatre. Still, it was painless at least, and once we were out of there our bags were waiting for us, ready to be picked up. After spending a confusing 30 minutes trying to figure out where each of us had to go (Rebecca was going on to Canada, myself to Orlando) we finally found the shuttle Rebecca had to take. Mere minutes later it arrived and she was whisked away to LAX Terminal 2, where she would catch her flight to Canada.
I stumbled around trying to find my way into the terminal that would take me to my final destination on this leg of my journey, getting hopelessly lost in the desolate landscape of LAX. I eventually found my way there through a long corridor that started evoking images of Orwell’s 1984, with a loudspeaker blaring warnings and my footsteps echoing in the lonely fluorescence. Then I was greeted with the friendly face of the TSA and my first ever American airport security check. They went over everyone’s ID with a UV light, took people’s bottles of water, made everyone take off their shoes and frisked about 1 in every 5 passengers. Suddenly the Australian security checks seemed mild in comparison. I got through with barely a second glance, but yet again I had that terrible feeling that my civil liberties were dying at the hands of the USA’s paranoia. This country didn’t make the greatest first impression.
I tried fruitlessly to find wifi and a working ATM, the lifeblood of my generation. None of the ATMs could do a cash withdrawal on my cards, even the Westpac one that’s apparently in cahoots with the Bank of America (which I was trying to use). All the wifi hotspots were either secured or paid portals leaving me disconnected and alone. I did nothing for almost an hour before sitting down to write this, thinking there was no point if I couldn’t publish it right away. Still writing is a great way to pass the time and I still had over an hour before my next flight was scheduled to depart.
The flight to Orlando was painful, even though I lucked out with the emergency exit row. Neither of my temporary travel companions was interested in striking up a conversation, and the jet lag was setting in with a vengeance. Couple that with my bony ass being unable to find comfort in the seats and it was 5 hours in the air that couldn’t go fast enough. I eventually found solace in one of the books I had picked up (Pandora’s Star by Peter F. Hamilton) and managed to pass the majority of the time without too much fuss. Then came the dreaded moment: would my luggage be there to greet me when I landed?
Although I’ve never lost anything through the airports I still have a healthy paranoia about them. If it’s anything but a direct flight I always think it’s going to get lost in the airport machine, doomed to bounce endlessly around the globe while I lie stranded, devoid of my clothes and other miscellany. 10 minutes after landing, however, there my bag was, just as I had left it at LAX 6 hours earlier. Flush with the victory of picking up my luggage I made a break for my hotel for the night, the Hyatt Regency at the Orlando airport.
Unbeknownst to me, the large atrium I had walked through to get my bags was in fact the hotel itself. After grabbing my keys I went to my room, which turned out to be quite opulent. After quickly changing into something more comfortable I hit the gym for a quick workout before heading out for dinner. I decided to try the in-hotel restaurant, McCoy’s Bar and Grill. The food was so-so, but the Californian wine was quite good and the service was unlike anything I had ever experienced before. This was definitely capitalism taken to the extreme, where minimum-wage workers fight their way up by providing you the ultimate in service. Having dinner out in Australia feels like getting spat in the face by comparison.
And now I’ve resigned myself to finishing off the $30 bottle of wine I have beside me and watching the Discovery Channel until I pass out. Hopefully my plan skirts around the horrible jet lag I felt earlier, but either way tomorrow I take on the challenge of trying to drive on the wrong side of the road in a Toyota Corolla, in preparation for one of the reasons I came here: to drive a Corvette around Florida for a week.