Posts Tagged ‘broken’

What’s Worse Than a Filter? A Backdoor Courtesy of David Cameron.

Technological enablers aren’t good or evil; they simply exist to facilitate whatever purpose they were designed for. Of course we always aim to maximise the good they’re capable of whilst diminishing the bad, but changing their fundamental characteristics (which are often the sole reason for their existence) in order to do so is, in my mind, abhorrent. This is why I think things like Internet filters and other solutions that hope to combat the bad parts of the Internet are a fool’s errand, as they seek to destroy the very thing they set out to improve. The latest instalment comes to us courtesy of David Cameron, who is now seeking a sanctioned backdoor into all encrypted communications and to legislate against those who’d resist.

[Image: David Cameron, shifty lookin’ fella]

Like most election waffle, Cameron is strong on rhetoric and weak on substance, but you can get the gist of it from this quote:

“I think we cannot allow modern forms of communication to be exempt from the ability, in extremis, with a warrant signed by the home secretary, to be exempt from being listened to.”

Essentially what he’s referring to is the fact that encrypted communications, the kind now routinely employed by consumer-level applications like WhatsApp and iMessage, shouldn’t be allowed to exist without a method for intelligence agencies to tap into them. It’s not that these communications are currently exempt from being listened to, it’s just infeasible for the security agencies to decrypt them once they’ve got their hands on them. The problem is that, unlike with other means of communication, introducing a mechanism like this, a backdoor by which encrypted communications can be decrypted, fundamentally breaks the utility of the service and introduces a whole slew of potential threats that will be exploited.

The crux of the matter stems from the trust relationships required for two-way encrypted communications to work. For the most part you’re relying on the channel between both parties being free from interference and monitoring by third parties. This is what allows corporations and governments to spread their networks over the vast reaches of the Internet, as they can ensure that information passing through untrusted networks isn’t subject to prying eyes. With this proposal in place, any encrypted communications passing through the UK’s networks could be intercepted, something I’m sure a lot of corporations wouldn’t like to sign on for. That’s not to mention the millions of regular people who rely on encrypted communications in their daily lives, like anyone who’s used Facebook or a secure banking site.

Indeed I believe the risks posed by introducing a backdoor into encrypted communications far outweigh any potential benefits you’d care to mention. Any backdoor into a system, no matter how well designed, severely weakens the encrypted channel’s ability to resist intrusion from a malicious attacker. No matter which way you slice it you’re introducing another attack vector into the equation: where there were, at most, 2 before (the 2 endpoints), there are now at least 3 (the 2 endpoints plus the backdoor). I don’t know about you, but I’d rather not increase my attack surface by 50% just because someone might’ve said plutonium in my private chats.

The idea speaks volumes about David Cameron’s lack of understanding of technology: whilst you might be able to get some commercial companies to comply with this, you have no way of stopping peer-to-peer encrypted communications using open source solutions. Simply put, if the government somehow managed to work a backdoor into PGP, it’d be a matter of days before it was no longer used and another solution slotted into its place. Sure, you could attempt to prosecute all those people using illegal encryption, but they said the same thing about BitTorrent and I haven’t seen mass arrests yet.

It’s becoming painfully clear that the conservative governments of the world are simply lacking a fundamental understanding of how technology works and thus concoct solutions which simply won’t work in reality. There are far easier ways for them to get the data they so desperately want (although I’m yet to see the merits of any of these mass surveillance networks), however they seem hell-bent on getting it in the most boneheaded way possible. I would love to say that my generation will be different when they get into power, but stupid seems to be an inheritable condition when it comes to conservative politics.

How Everything Went To Shit (or No Admin is Immune to Being Stupid).

This blog has had a pretty good run as far as data retention goes. I’ve been through probably a dozen different servers over its life and every time I’ve managed to maintain continuity of pretty much everything. It’s not because I kept rigorous backups or anything like that; I was just good at making sure I had all my data moved over and working before I deleted the old server. Sure, there are various bits of data scattered among my hard drives, but none of it is readily usable, so should the unthinkable happen I’d be up the proverbial creek without a paddle.

And, of course, late on Saturday night, the unthinkable happened.

[Image: Picard facepalm]

So I logged into my blog to check out how everything was going (as I usually do) and noticed something strange appearing in my header. It appeared to be some kind of mass mailer, although it wasn’t being pulled in from a JavaScript file or anything and, to my surprise, it was embedding itself everywhere, even on the admin panel. Now I’ve never been compromised before, although people have tried, so this sent me into something of a panic and I started Googling my heart out to find where this damn code was coming from. Try as I might, however, I couldn’t find the source of it (nothing in the Apache configuration, all the WordPress files were uncompromised, other sites I’m hosting weren’t affected) and I resigned myself to rebuilding the server and starting anew. Annoying, but nothing I haven’t done before.
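
For anyone wanting to do the same kind of digging, the sketch below shows the sort of checks involved; the paths are assumptions based on a standard install under /var/www/wordpress, and the last check only applies if you happen to have wp-cli on the box.

```bash
# A rough sketch of hunting for injected code on a WordPress box.
# Paths are assumptions based on a standard /var/www install.

# PHP files modified in the last week -- injected code usually shows up here
find /var/www/wordpress -name '*.php' -mtime -7 -ls

# The usual obfuscation suspects in themes, plugins and uploads
grep -rn --include='*.php' -E 'eval\(|base64_decode\(|gzinflate\(' /var/www/wordpress/wp-content

# If wp-cli is installed, compare core files against the official checksums
wp core verify-checksums --path=/var/www/wordpress
```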

Like a good little admin I thought it would be good to do a cleanup of the directory before I embarked on this, as I was going to have to move the backup file to my desktop, no small feat considering it was some 1.9GB big and I’m on Australian Internet (thanks Abbott!). I had a previous backup file there which I moved to my /var/www directory to make sure I could download it (I could), and so I set about cleaning everything else up. I’ve had a couple of legacy directories in there for a while and I decided to remove them. This would have been fine except I fat-fingered the command, and the rm -r I typed happily went about its business deleting the entire folder’s contents. The next ls I ran sent me into a fit of rage as I struggled to figure out what to do next.
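
For the curious, here’s a rough sketch of the kind of slip involved along with the habits that would have saved me; the directory names are made up purely for illustration.

```bash
# A sketch only -- directory names here are made up, but this is the kind of
# fat-finger where a stray space or wildcard sends rm -r after the wrong target.
rm -r /var/www/old-site          # what I meant to do
# rm -r /var/www/ old-site       # the sort of slip that takes out the whole web root

# Habits that would have saved me:
alias rm='rm -I'                 # GNU rm: prompt once before recursive or bulk deletes
ls -d /var/www/old-site          # echo the exact target first, then reuse that same path
mv /var/www/old-site /tmp/trash/ # "delete" by moving aside, purge later once you're sure
```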

If this were a Windows box it would’ve been a minor inconvenience, as I’d just fire up Recuva (if CTRL + Z didn’t work) and get all the files restored, however in Linux restoring deleted files seems to be a right pain in the ass. Try as I might, extundelete couldn’t restore squat and every other application looked like it required a PhD to operate. The other option was to contact my VPS provider’s support to see if they could help out, however since I’m not paying a terrible amount for the service I doubt it would have been very expedient, nor would I have expected them to be able to recover anything.
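
For what it’s worth, the attempt looked roughly like the below, reconstructed from memory; the device name is an assumption (virtio disks on a VPS typically show up as /dev/vda) and in my case it recovered essentially nothing.

```bash
# A sketch of an extundelete recovery attempt, reconstructed from memory.
# Device name is an assumption; adjust for your own box.
# The filesystem needs to be unmounted (or mounted read-only) first, which on a
# VPS root filesystem usually means booting a rescue image -- half the problem.
extundelete /dev/vda1 --restore-directory var/www   # path relative to the filesystem root
extundelete /dev/vda1 --restore-all                 # the shotgun approach
# Anything it finds lands under ./RECOVERED_FILES/ in the current directory
```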

In desperation I reached out to my old VPS provider to see if they still had a copy of my virtual machine. The service had only been cancelled a week prior and I know a lot of providers keep copies for a little while just in case something like this happens, mostly because it’s a good source of revenue (I would’ve gladly paid $200 for it). However, this morning the email came through from them stating unequivocally that the files were gone and there was no way to get them back, so I was left with very few options for getting everything working again.

Thankfully I still had the database, which contains much of the configuration information required to get this site back up and running, so all that was required was to get a base WordPress install working and then reinstall the necessary plugins. It was during this exercise that I stumbled across the potential attack vector that let whoever it was ruin my site in the first place: my permissions were all kinds of fucked, essentially allowing open slather to anyone who wanted it. Whilst I’ve since struggled to get everything working like it was before, I now know that my permissions are far better than they were and should hopefully keep this from happening again.
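
For reference, a sane baseline looks something like the below; it’s a sketch assuming a Debian-style box where Apache runs as www-data and WordPress lives under /var/www/wordpress, so adjust the names to suit.

```bash
# Baseline WordPress permissions -- a sketch, assuming Apache runs as www-data
# and the install lives at /var/www/wordpress.
chown -R www-data:www-data /var/www/wordpress
find /var/www/wordpress -type d -exec chmod 755 {} \;   # directories: rwxr-xr-x
find /var/www/wordpress -type f -exec chmod 644 {} \;   # files: rw-r--r--
chmod 600 /var/www/wordpress/wp-config.php              # keep the DB credentials private
```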

As for the rest of the content I have about half of the images I’ve uploaded over the past 5 years in a source folder and, if I was so inclined, could reupload them. However I’ve decided to leave that for the moment as the free CDN that WordPress gives you as part of Jetpack has most of those images in it anyway which is why everything on the front page is working as it should. I may end up doing it anyway just as an exercise to flex my PowerShell skills but it’s no longer a critical issue.

So what has this whole experience taught me? Well, mostly that I should practice what I preach: if a customer came running to me in this situation I’d have little sympathy for them and would likely spend maybe 20% of the effort I’ve spent on this site trying to restore theirs. The unintentional purge has been somewhat good, as I’ve dropped many of the plugins I no longer used, which has made the site substantially leaner, and I’ve moved from having my pants around my ankles, begging for attackers to take advantage of me, to at least holding them around my waist. I’ll also be implementing some kind of rudimentary backup solution so that if this happens again I at least have a point in time to restore to, as this whole experience has been far too stressful for my liking and I’d rather not repeat it.
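
A rudimentary solution here could be as simple as a nightly cron job along these lines; a sketch only, where the database name, credentials file and paths are all placeholders, and the resulting archives really ought to be copied somewhere off the box.

```bash
#!/bin/bash
# Nightly WordPress backup -- a rough sketch, not a finished script.
# Database name, credentials file and paths are placeholders.
STAMP=$(date +%F)
BACKUP_DIR=/backups
mkdir -p "$BACKUP_DIR"

# Dump the database; credentials live in /root/.my.cnf rather than on the command line
mysqldump --defaults-extra-file=/root/.my.cnf wordpress | gzip > "$BACKUP_DIR/db-$STAMP.sql.gz"

# Archive the web root, wp-config and uploads included
tar -czf "$BACKUP_DIR/files-$STAMP.tar.gz" /var/www/wordpress

# Keep a fortnight's worth, then prune
find "$BACKUP_DIR" -type f -mtime +14 -delete

# Cron entry, e.g.: 0 3 * * * /usr/local/sbin/blog-backup.sh
```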


Shit’s Breaking Everywhere, Captain.

So it turns out that my blog has been down for the last 2 days and I, in my infinite wisdom, failed to notice. It seems like no matter how I set this thing up it will end up causing some problem that inevitably brings the whole server to its knees, killing it quietly whilst I go about my business. Now this isn’t news to anyone who’s read my blog for any length of time, but it eerily coincided with my main machine “forgetting” its main partition, leaving me with no website and a machine that refused to boot.

Realistically I’m a victim of my own doing since my main machine is getting a bit long in the tooth (almost 3 years now by my guess), but even before it hit the 6 month mark I was having problems. Back then it was some extremely obscure issue that only seemed to crop up in certain games, where I couldn’t get more than 30 seconds into playing them before the whole machine froze and repeatedly played the last second of sound until I pulled the plug on it. That turned out to be RAM requiring more volts than it claimed, and everything seemed to run fine until I hit a string of hard drives that magically forgot their partitions (yes, in much the same fashion as my current one did). Most recently it has taken to hating having all of its RAM slots filled, even though both sticks work fine in isolation. Maybe it’s time this bugger went the way of Old Yeller.

Usually a rebuild isn’t much of a hassle for someone like me. It’s a pain to be sure, but the payoff at the end is a much leaner and meaner rig that runs everything faster than it did before. This time around, however, it also meant configuring my development environment again whilst making sure that none of my code suffered in the apparent partition failure. I’m glad to say that whilst it did kill a good couple of hours I was otherwise planning to spend lazing about, I’ve got everything functional again and no code was harmed in the exercise.

You might be wondering why the hell I’m bothering to post this then, since it’s such a non-event. Well, for the most part it’s to satisfy that part of me that likes to blog every day (no matter how hard I try to quell him), but it’s also to make sure the whole thing is running again and that Google is aware my site hasn’t completely disappeared. So for those of you who were expecting something more I’m deeply sorry, but until the new year comes along I’m not sure how much blogging I’m going to be doing, let alone any of the well thought out pieces I tend to hit at least a couple of times a week 😉

Technical Difficulties.

It just goes to show that even if you think you’ve done everything right, chances are there’s something small you missed. I got home yesterday and was greeted not only by a cat who demands attention immediately (or face the wrath of his continuous antics, which should make for an interesting post in the future) but also by an Internet connection that refused to play ball. Cue an hour or so of troubleshooting and swapping between modems and routers, and I finally thought I had it fixed. That was until around 2:45am this morning.

Turns out my modem thought it would be a spiffing time to reboot itself. This shouldn’t have caused more than about 5 minutes of downtime. However, due to my previous tinkering, several network settings had changed and my modem was now blissfully unaware of them. It turns out that saving the config on this particular router only saves it into RAM, and any reboot kills any settings not committed the proper way. Needless to say I’ve fixed this issue and it shouldn’t happen again… well, not for a while at least! 🙂

I guess it’s like I’ve said before: we IT guys have the most interesting computer problems, and no matter how sure we are of what we’ve done they always find a way to make us look like fools. I’ll make up for today’s blog post with something interesting tomorrow, I promise! 😀

We Have the Most Interesting Problems.

No matter what you do you’ve got to have a bit of pride in what you’re doing. I’d love to tell everyone that my sense of pride in my work comes from my long line of successful projects, which I will admit do give me a warm and fuzzy feeling, but more and more I think it comes down to this: give me any IT system known to man, be it a personal computer or corporate infrastructure, and I guarantee I’ll find a problem that no one has ever seen before and that no one will even try to explain.

This came up recently with the blade implementation I mentioned a while ago. Everything had been going great, with our whole environment able to run on a single blade comfortably. Whilst I was migrating everything across, something happened that managed to knock one of our 2 blades offline. No worries, I thought to myself, I had enabled HA on the farm so all the virtual machines would magically reappear. Not 2 minutes later our other blade server dropped off the network, taking all the (non-production, thank heavens) servers offline. After spending a lot of time getting this up and running I was more than a little irked that it had developed a problem like this, but I endeavoured to find the cause.

That was about 2 weeks ago, and I thought I had nipped it in the bud when I found the machines responsible and modified their configuration so they’d behave. I was reconfiguring some network properties on one machine when I suddenly lost connection again. Knowing that this could happen I had made sure to move most of the servers off before attempting it, so we didn’t lose our entire environment this time around. However, what troubled me wasn’t the blade dropping off the network, it was how I managed to trigger it (a bit of shop talk follows).

VMware’s hypervisor is supposed to abstract the physical hardware away from the guest operating system so that you can easily divvy it up and get more use out of a server. As such it’s pretty rare for a change within a guest to affect the physical hardware. However, when I changed one network adapter within a guest from a static address (it was on a different subnet prior to migration) to DHCP I completely lost network connectivity to both the guest and the host. It seems that some funny interaction between VMware, HP blades and the Windows TCP/IP stack means that when you do what I did, the network stack on the VMware host gets corrupted (I’ve confirmed it’s not the Virtual Connect module or anything else, since virtual machines running on a different blade in the same chassis carried on perfectly well).

I’ve struggled with similar things on my own personal computer for years. My current machine suffers from random BSODs that I’m sure are due to the motherboard, which is unfortunately the only component I can’t easily replace. Every phone I’ve had for the past 3 years has suffered from one problem or another that rendered it useless for extended periods of time. Because of this I’ve come to the conclusion that, because I’m supposed to be an expert with technology, I will inherently get the worst problems.

It’s not all bad though. With problems like these comes experience. Just like my initial projects which ultimately failed to deliver (granted, one of those was a project at university and the other was woefully under-resourced), I learnt what can go wrong and where, and had to develop the troubleshooting skills to cope with it. I don’t think I’d know a lot about technology today if I hadn’t had so many things break on me. It was this quote that summed it up so well for me:

I’ve missed more than 9,000 shots in my career. I’ve lost almost 300 games. 26 times I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life and that is why I succeed.

That quote was from Michael Jordan. A man who is constantly associated with success attributes it to his failures, something which I can attest to. It also speaks to the engineer in me, as with any engineering project the first implementation should never be the one delivered, as revising each implementation lets you learn where you made mistakes and correct them. There’s only so much you can learn from getting it right.

This still doesn’t stop me from wanting to thrash my computer for its dissent against me, however 🙂