Posts Tagged ‘communications’

Li-Fi: 100 Times Faster, 100 Times Less Useful.

There are certain fundamental limitations when it comes to current wireless communications. Mostly it comes down to the bandwidth of the frequencies used: as more devices come online, the more congested those frequencies become. Simply changing frequencies isn’t enough to solve the problem, however, especially when it comes to technology as ubiquitous as wifi. This is what has driven many to look for alternative technologies, some looking to make the interference work for us whilst others are looking at doing away with radio frequencies entirely. Li-Fi is a proposed technology that uses light instead of RF to transmit data and, whilst it posits speeds up to 100 times faster than conventional wifi, I doubt it will ever become the wireless communication technology of choice.


Li-Fi utilizes standard LED light bulbs that are switched on and off in nanoseconds, far too fast for the human eye to perceive any change in the light’s output. Whilst the lights need to remain in an on state in order to transmit data, they are apparently still able to transmit when dimmed below the level the human eye can perceive. A direct line of sight isn’t required for the technology to work either, as light reflected off walls was still able to produce a usable, albeit significantly reduced, data signal. The first commercial products were demonstrated sometime last year so the technology isn’t just a nice theory.
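To make the idea concrete, here’s a minimal sketch of the simplest scheme a Li-Fi style link could use, on-off keying, where each bit becomes a brief “LED on” or “LED off” period (the symbol period here is an invented figure, not from any real Li-Fi spec):

```python
# Illustrative on-off keying (OOK): each bit becomes a short "LED on"
# or "LED off" period. The 100 ns symbol period is arbitrary; the point
# is that the switching is far faster than the eye can perceive.

def encode_ook(bits, symbol_ns=100):
    """Turn a bit string into (led_on, duration_ns) switching events."""
    return [(bit == "1", symbol_ns) for bit in bits]

def decode_ook(events):
    """Recover the bit string from the switching events."""
    return "".join("1" if on else "0" for on, _ in events)

signal = encode_ook("1011001")
print(signal)              # [(True, 100), (False, 100), (True, 100), ...]
print(decode_ook(signal))  # "1011001"
```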

However such technology is severely limited by numerous factors. The biggest limitation is the fact that it can’t work without near or direct line of sight between the sender and receiver, which means that a transmitter is required in every discrete room that you want to use your receiver in. This also means that whatever is feeding data into those transmitters, say a cabled connection, also needs to be present. Compared to a wifi endpoint, which usually just needs to be placed in a central location to work, this is a rather heavy requirement to satisfy.

Worse still this technology cannot work outside due to sunlight overpowering the signal. This likely also means that any indoor implementation would suffer greatly if there was sunlight entering the room. Thus the claimed 100 times speed-up over conventional wifi is likely just a laboratory number and not representative of real world performance.

The primary driver for technologies like these is convenience, something which Li-Fi simply can’t provide given its current limitations. Setting up a Li-Fi system won’t be as easy as screwing in a few new light bulbs, it will likely require some heavy investment in either cabling infrastructure or ethernet-over-power systems to support them. Compare this to any wifi endpoint which just needs one data connection to cover a large area (which can be set up in minutes) and I’m not sure customers will care how fast Li-Fi can be, especially if they also have to buy a new smartphone to use it.

I’m sure there will be some niche applications of this technology but past that I can’t really see it catching on. Faster speeds are always great but they’re all for naught if the limitations on their use are as severe as they are with Li-Fi. Realistically you can get pretty much the same effect with a wired connection and even then the most limiting factor is likely your Internet connection, not your interconnect. Of course I’m always open to being proved wrong on this but honestly I can’t see it happening.

What’s Worse Than a Filter? A Backdoor Courtesy of David Cameron.

Technological enablers aren’t good or evil, they simply exist to facilitate whatever purpose they were designed for. Of course we always aim to maximise the good they’re capable of whilst diminishing the bad, however changing their fundamental characteristics (which are often the sole purpose for their existence) in order to do so is, in my mind, abhorrent. This is why I think things like Internet filters and other solutions which hope to combat the bad parts of the Internet are a fool’s errand as they would seek to destroy the very thing they set out to improve. The latest instalment of which comes to us courtesy of David Cameron who is now seeking to have a sanctioned backdoor to all encrypted communications and to legislate against those who’d resist.


Like most election waffle Cameron is strong on rhetoric but weak on substance; you can get the gist of it from this quote:

“I think we cannot allow modern forms of communication to be exempt from the ability, in extremis, with a warrant signed by the home secretary, to be exempt from being listened to.”

Essentially what he’s referring to is the fact that encrypted communications, the kind now routinely employed by consumer level applications like WhatsApp and iMessage, shouldn’t be allowed to exist without a method for intelligence agencies to tap into them. It’s not like these communications are exempt from being listened to currently, it’s just infeasible for the security agencies to decrypt them once they’ve got their hands on them. The problem is that, unlike with other means of communication, introducing a mechanism like this, a backdoor by which encrypted communications can be decrypted, fundamentally breaks the utility of the service and introduces a whole slew of potential threats that will be exploited.

The crux of the matter stems from the trust relationships that are required for two way encrypted communications to work. For the most part you’re relying on the channel between both parties being free from monitoring and tampering by third parties. This is what allows corporations and governments to spread their networks over the vast reaches of the Internet, as they can ensure that information passing through untrusted networks isn’t subject to prying eyes. Under this proposal any encrypted communications which pass through the UK’s networks could be intercepted, something which I’m sure a lot of corporations wouldn’t like to sign on for. This is not to mention the millions of regular people who rely on encrypted communications in their daily lives, like anyone who’s used Facebook or a secure banking site.
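To see what’s at stake, here’s a minimal sketch of an end-to-end encrypted exchange using the PyNaCl library (any public-key scheme would illustrate the same point): only the two endpoints hold private keys, so there is simply no third place to “listen in” without deliberately adding one.

```python
# End-to-end encryption sketch using PyNaCl (pip install pynacl).
# Only Alice and Bob hold private keys; anyone on the wire sees
# ciphertext. A "backdoor" necessarily means a third key that can
# decrypt everything, which is exactly what this design exists to prevent.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key.
alice_box = Box(alice_sk, bob_sk.public_key)
ciphertext = alice_box.encrypt(b"meet at the usual place")

# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_sk, alice_sk.public_key)
print(bob_box.decrypt(ciphertext))  # b'meet at the usual place'
```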

Indeed I believe the risks posed by introducing a backdoor into encrypted communications far outweigh any potential benefits you’d care to mention. You see any backdoor into a system, no matter how well designed it is, severely weakens the encrypted channel’s ability to resist intrusion from a malicious attacker. No matter which way you slice it you’re introducing another attack vector into the equation: where there were, at most, 2 before (the 2 endpoints) there are now at least 3 (the 2 endpoints plus the backdoor). I don’t know about you but I’d rather not increase my risk of being compromised by 50% just because someone might’ve said plutonium in my private chats.
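Some rough back-of-the-envelope arithmetic makes the point, assuming (purely for illustration) independent attack vectors with an equal chance of compromise:

```python
# Toy model: if each attack vector is independently compromised with
# probability p, the chance that at least one succeeds is 1 - (1 - p)^n.
p = 0.01  # illustrative per-vector compromise probability

for n, label in [(2, "two endpoints"), (3, "two endpoints + backdoor")]:
    risk = 1 - (1 - p) ** n
    print(f"{label}: {risk:.4f}")
# At small p the third vector raises overall exposure by roughly 50%,
# and unlike an endpoint, a backdoor key is a single target that
# unlocks every conversation at once.
```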

The idea speaks volumes to David Cameron’s lack of understanding of technology as, whilst you might be able to get some commercial companies to comply with this, you will have no way of stopping peer to peer encrypted communications using open source solutions. Simply put if the government somehow managed to work a backdoor into PGP it’d be a matter of days before it was no longer used and another solution took its place. Sure, you could attempt to prosecute all those people using illegal encryption, but they said the same thing about BitTorrent and I haven’t seen mass arrests yet.

It’s becoming painfully clear that the conservative governments of the world are simply lacking in fundamental understanding of how technology works and thus concoct solutions which simply won’t work in reality. There are far easier ways for them to get the data they so desperately need (although I’m yet to see the merits of any of these mass surveillance networks) however they seem hell bent on getting it in the most ham-fisted way possible. I would love to say that my generation would be different when they get into power but stupid seems to be an inheritable condition when it comes to conservative politics.

The Artemis pCell: Making Interference Work For You.

It will likely come as a shock to many to find out that Australia leads the world in terms of 4G speeds, edging out many other countries by a very healthy margin. As someone who’s a regular user of 4G for both business and pleasure I can attest to the fact that the speeds are phenomenal, with many of the CBD areas around Australia giving me 10~20Mbps on a regular basis. However the speeds have noticeably degraded over time; back in the early days it wasn’t unheard of to get double those speeds, even if you were on the fringes of reception. The primary factor in this is an increased user base: as the network becomes more loaded the bandwidth available to everyone heads south.

There are 2 factors at work here, both of which influence the amount of bandwidth a device will be able to use. The primary one is the size of the backhaul pipe on the tower, as that is the hard limit on how much traffic can pass through a particular end point. The second, and arguably just as important, factor is the number of devices vs the number of antennas on the base station, as this determines how much of the backhaul speed can be delivered to a specific device. This is what I believe has been mostly responsible for the reduction in 4G speeds I’ve experienced but, according to the engineers at Artemis, a new communications start-up founded by Steve Perlman (the guy behind the now defunct OnLive), that might not be the case forever.
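A quick back-of-the-envelope sketch of how those two factors interact (all figures invented for illustration):

```python
# Per-device throughput is capped by whichever is tighter: the tower's
# backhaul split across active devices, or each device's share of the
# air interface. All numbers below are illustrative only.
def per_device_mbps(backhaul_mbps, air_capacity_mbps, antennas, devices):
    backhaul_share = backhaul_mbps / devices
    # Crudely: the radio can serve roughly one device per antenna at a
    # time, so each device gets about antennas/devices of the air capacity.
    air_share = air_capacity_mbps * min(1, antennas / devices)
    return min(backhaul_share, air_share)

print(per_device_mbps(1000, 40, antennas=4, devices=10))   # lightly loaded: 16.0
print(per_device_mbps(1000, 40, antennas=4, devices=200))  # congested cell: 0.8
```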

Artemis pCell pWave

Artemis’ new system hopes to solve the latter part of the equation not by eliminating signal interference, which is by definition impossible, but by utilizing it to create pCells (personal cells) that are unique to each and every device present on the network. According to Perlman this would allow an unlimited number of devices to coexist in the same area and yet still receive the same amount of signal and bandwidth as if each were on the network all by itself. Whilst he hasn’t divulged exactly how this is done yet he has revealed enough for us to get a good idea about how it functions and I have to say it’s quite impressive.

The pWave base stations pictured above are only a small part of the equation; indeed, from what I’ve read, they’re not much different from a traditional base station under the hood. The magic comes in the form of the calculations done prior to the signal being sent out: instead of blindly broadcasting (like current cell towers do) the system uses your location, and that of everyone else connected to the local pCell network, to determine how the signals are sent out. This manifests as a signal that’s coherent only at the location of your handset, giving you the full amount of signal bandwidth regardless of how many other devices are nearby.
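Artemis hasn’t published their actual algorithm, but the underlying physics is easy to demonstrate: if each transmitter pre-compensates its phase for the distance to a target point, the waves add constructively there and largely cancel elsewhere. A toy numerical sketch:

```python
import numpy as np

# Toy phased-transmission demo: several transmitters pre-compensate for
# their distance to a target point so their waves arrive in phase there.
# Elsewhere the phases don't line up and the signals mostly cancel.
freq = 2.4e9                  # illustrative carrier frequency, Hz
c = 3e8                       # speed of light, m/s
k = 2 * np.pi * freq / c      # wavenumber

txs = np.array([[0, 0], [5, 0], [10, 0], [15, 0]])  # transmitter positions (m)
target = np.array([7.0, 20.0])                      # the handset's location

def field_at(point):
    """Magnitude of the summed unit-amplitude waves, each phase-shifted
    so they are coherent at `target`."""
    total = 0j
    for tx in txs:
        d_point = np.linalg.norm(point - tx)
        d_target = np.linalg.norm(target - tx)
        total += np.exp(1j * k * (d_point - d_target))  # pre-compensated phase
    return abs(total)

print(field_at(target))                 # 4.0: all four waves fully coherent
print(field_at(np.array([2.0, 20.0])))  # typically far below 4: out of phase
```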

I did enough communications and signal processing at university to know something like this is possible (indeed it’s a similar kind of technology that powers “sound lasers”) and could well work in practice. The challenges facing this technology are many but from a technical standpoint there are 2 major ones I can see. Firstly it doesn’t solve the backhaul bandwidth issue, meaning there’s still an upper limit on how much data can be passed through a tower, regardless of how good the signal is. For a place like Australia this would be easily solved by implementing a full fibre network which, unfortunately, seems to be off the cards currently. The second problem is more nuanced and has to do with the calculations required and the potential impacts those might have on the network.

Creating these kinds of signals, ones that are only coherent at a specific location, requires a fair bit of back end calculation to occur before the signal can be sent out. The more devices you have in any particular area the more challenging this becomes and the longer the calculations take before the signal can be generated. This has the potential to introduce lag into the network, something that might be somewhat tolerable from a data perspective but is intolerable when it comes to voice transmission. To their credit Artemis acknowledges this challenge and has stated that their system can handle up to 100 devices currently, so it will be very interesting to see if it can scale out like they believe it can.
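One plausible way to frame that computation (an assumption on my part, since Artemis haven’t published the details) is as MIMO-style precoding: solve for transmit weights such that each device’s signal cancels at every other device’s location. A sketch of the idea:

```python
import numpy as np

# Zero-forcing style precoding sketch: given a channel matrix H
# (devices x antennas), precode the symbol vector s with the
# pseudo-inverse so each device receives only its own symbol.
# Computing the pseudo-inverse costs roughly O(n^3), which is why
# the number of devices per area matters so much.
rng = np.random.default_rng(0)
n_devices, n_antennas = 4, 4

H = (rng.normal(size=(n_devices, n_antennas))
     + 1j * rng.normal(size=(n_devices, n_antennas)))
s = np.array([1, -1, 1, 1], dtype=complex)  # one symbol per device

W = np.linalg.pinv(H)   # precoding matrix, computed before transmitting
x = W @ s               # what the antennas actually send
received = H @ x        # what each device hears after the channel

print(np.round(received.real, 6))  # ~[ 1. -1.  1.  1.]: interference cancelled
```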

Of course this all hinges on the incumbent cellular providers getting on board with this technology, something which a few have already said they’re aware of but haven’t gone much further than that. If it works as advertised then it’s definitely a disruptive technology, one that I believe should be adopted everywhere, but large companies tend to shy away from things like this, which could strongly hamper adoption. Still this tech could have wide reaching applications outside the mobile arena, as things like municipal wireless could also use it to their advantage. Whether it will see application there, or anywhere for that matter, will be something to watch out for.


The Space Internet and Intrasolar Communications.

The Internet as it stands today is the greatest revolution in the world of communications. It’s a technical marvel, enabling us to do many things that even a couple of decades ago were firmly in the realms of science fiction. Indeed the incredible acceleration of technical innovation we’ve experienced in recent history can be attributed to the wide reaching web that enables anyone to transmit information across the globe. So with the human race on the verge of a space revolution that could see a human presence reaching far out into our solar system, a question burns away in the minds of those who’d venture forth.

How would we take the Internet with us?

As it stands currently the Internet is extremely unsuitable for inter-planetary communications, at least with our current level of technology. Primarily this is because the Internet is built on the TCP/IP protocols, which abstract away a lot of the messy parts of sending data across the globe. IP makes no guarantees about when, or even whether, data will arrive at its destination, and TCP papers over that with acknowledgements and retransmissions. Here on Earth that’s not much of a problem since if anything goes missing we can simply request the data be sent again, which takes fractions of a second. In space, however, the trade-offs made by the foundations of the Internet cause immense problems, even at short distances like, say, from here to Mars.

Even at its closest approach Mars is over 3 light-minutes away, so transmissions take approximately 3 minutes and 20 seconds to reach Earth (and more than 20 minutes when the planets are at their furthest apart). Such a delay is quite workable for scientific craft but for large data transfers it represents some very unique problems. For starters, requesting that data be resent means that whatever system was relying on that data must wait the best part of 7 minutes, at a minimum, to continue what it was doing. This means chatty protocols like the TCP/IP stack simply cannot be used over distances like these, where re-transmission of data is so costly, and thus the Internet as it exists now can’t really reach any further than it already does. There is the possibility for something more radical, however.
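The delay doesn’t just make retransmission painful, it throttles TCP directly, since a sender can only have one window of unacknowledged data in flight per round trip. A rough illustration (window size and delay figures are representative, not exact):

```python
# TCP throughput is bounded by window_size / round_trip_time: the
# sender stalls until acknowledgements come back. Figures illustrative.
def tcp_max_throughput(window_bytes, rtt_seconds):
    return window_bytes / rtt_seconds  # bytes per second

window = 65535       # classic default TCP receive window
rtt_earth = 0.1      # ~100 ms, a long terrestrial path
rtt_mars = 2 * 200   # ~400 s round trip at a close approach to Mars

print(tcp_max_throughput(window, rtt_earth))  # ~655 KB/s
print(tcp_max_throughput(window, rtt_mars))   # ~164 bytes/s
```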

For most space missions now the communication method of choice is a combination of proprietary protocols coupled with directed microwave communication. For most missions this works quite well, especially when you consider examples like Voyager, which is some 16 light hours from Earth, however these systems don’t generalize very well since they’re usually designed with a specific mission in mind. Whilst an intrasolar internet would also have to rely on microwaves as its primary transmission method, I believe that a network of satellites set up as an Aldrin Cycler between the planets of our solar system could provide the infrastructure needed to make such a communications network possible.

In essence such satellites would be akin to the routers that power the Internet currently, with the main difference being that each satellite would verify the data in its entirety before forwarding it on to the next hop. Their primary function would also change depending on which part of the cycle they were in, with satellites close to a planet functioning as a downlink and the others functioning as relays. You could increase reliability by adding more satellites and they could easily be upgraded in orbit as part of missions heading to their destination planet, especially if they also housed a small space station. Such a network would also only have to operate between a planet and its two closest neighbors, making it easy to extend to the outer reaches of the solar system.
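A toy model of that hop-by-hop behaviour (my own sketch, not any real DTN implementation): each relay verifies the whole bundle before passing it on, so corruption triggers a resend over one short leg rather than the entire interplanetary path.

```python
import hashlib

# Toy store-and-forward relay: verify the full bundle's checksum before
# forwarding, so errors are caught and repaired hop-by-hop instead of
# end-to-end across the whole interplanetary link.
def make_bundle(payload: bytes) -> dict:
    return {"payload": payload, "digest": hashlib.sha256(payload).hexdigest()}

def relay(bundle: dict, next_hop) -> None:
    actual = hashlib.sha256(bundle["payload"]).hexdigest()
    if actual != bundle["digest"]:
        raise IOError("bundle corrupted: request resend over this hop only")
    next_hop(bundle)  # stored and verified, forward to the next satellite

received = []
relay(make_bundle(b"science data"), received.append)
print(received[0]["digest"][:16])  # bundle arrived intact
```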

The base stations on other planets and heavenly bodies would have to have massive caches that held a sizable portion of the Earth Internet to make it more usable. Whilst you couldn’t have real time updates like we’re used to here you could still get most of the utility of the Internet with nightly uploads of the latest content. You could even do bulk data uploads and downloads to the satellites when they were close to the other planets, using higher bandwidth, shorter range communications, with the data then trickle fed over the link as the satellite made its way back to the other part of its cycle. This would be akin to bundling a whole bunch of tapes into a station wagon and sending it down the highway: extremely high bandwidth, albeit at a huge latency.
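The station wagon comparison holds up numerically; with some invented figures:

```python
# "Never underestimate the bandwidth of a station wagon full of tapes":
# physically shipped storage has huge throughput and terrible latency.
# All figures below are invented for illustration.
capacity_tb = 500    # storage aboard the cycler satellite
transit_days = 180   # one leg of the cycler's orbit

bits = capacity_tb * 1e12 * 8
seconds = transit_days * 86400
print(f"{bits / seconds / 1e6:.0f} Mbps effective")  # ~257 Mbps
# ...at a latency of six months, versus a radio link of (say) a few
# Mbps at ~20 minutes. Bulk cache updates suit the former; anything
# interactive needs the latter.
```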

Such a network would not do away with the transmission delay problems but it would provide a reliable, Internet-like link between Earth and other planets. I’m not the first to toy with this kind of idea either: NASA tested their Disruption Tolerant Networking back in 2008, a protocol designed with the troubles of space in mind. Their focus was primarily on augmenting future, potentially data intensive missions but it could easily be extended to cover more generalized forms of communication. The simple fact that agencies like NASA are already well on their way to testing this idea means we’re already on our way to extending the Internet beyond its earthly confines, and it’s only a matter of time before it becomes a reality.


IT Career Paths.

I’m constantly amazed by the number of people who say they work in IT yet have very little to do with anything in the field (apart from doing their work on a computer). Admittedly most of these people are in management, so saying that they’re “in IT” is about as applicable as them being “in field X” where X can be any industry in which you need to organise one group of people with another for a common goal. Still there’s quite a variety of career paths in IT and, as far as the everyman goes, most of them get lumped into the same bucket: “guy who knows computers”. I thought it might be interesting to take you down a couple of the career paths I’ve been down myself and show where I’ve seen them lead people over the past half a decade or so.

IT Support:

This is probably the career path everyone is most familiar with: those guys who fix computers for a living. Landing a job in this area doesn’t require anything more than any other entry level job you might find around the place, but you’ll usually end up in one of those dreaded call centers. The good news is that for anyone looking to break into IT there are always going to be positions like these going, as the turnover rate is quite high for entry level work, somewhere in the order of 30~50% for most places. Still if you can stick this out for a good year or two (depending on how skilled you are) there’s light at the end of the help desk tunnel.

Funnily enough the next “level” of IT support is just that: Level 2 Support. In essence you’ll be one of the behind the scenes guys who has more access to, and more knowledge about, the systems the front line people are taking calls for, and you’ll be the one they come to for help. At this level you’ll probably be expected to start doing some outside learning about products that you (or your company) haven’t had any experience with yet, usually in the hope of moving you up to the next level. Second level guys are usually not responsible for adding new things to the environment and are best suited to supporting the first level whilst being the conduit to the level above.

The final incarnation of the IT support person is usually referred to as Level 3 Support or Technology Specialist. After spending a couple of years at the second level most people will have gained significant skills in troubleshooting various software and hardware issues and hopefully acquired some certifications in various technologies. At this point there are a couple of options open: continue down the support line (generalist) or focus on a specific technology (specialist). Both have their advantages, as the generalist won’t have trouble finding a job in almost any organisation whilst the specialists will attract quite high salaries for their specialised skill set. Generally most people become a generalist first for a year or so while they work out what they want to build their career on.

This is the level I’m currently at and I initially tried to specialize in virtualization and Storage Area Networks (SANs), however my current position uses neither of these skills. It’s a good and bad thing: whilst I’m learning about a whole lot of new technologies (like Hyper-V) my specialist skills go unused. In all honesty though my most valuable skills as an engineer have gone mostly unused since I got my degree back at the end of 2006, so it’s really not that surprising, and I’ve found that the ability to quickly adapt to the requirements of an employer lands me more jobs than my skills in any one area.

They did help me get my foot in the door though 😉

Developer:

Behind those who support the things you’re viewing this web page on are those who actually built the software that it runs on. In a general sense these guys are referred to as developers and there are quite a few different types, ranging from your more traditional desktop application programmers to the current rock stars of the programming world: the web programmers.

Starting off a career in programming isn’t as easy as IT support. For the most part you’ll have to have some level of academic experience in the field before most places will give you a second look. Most programmers will have done a bachelor degree in either Computer Science or Software Engineering (or Engineering in Software Engineering for those true engineers) with a few starlets from the generic IT degrees making their way into the entry level programmer ranks. Junior programming jobs are a bit harder to come across but there are usually good opportunities to be had in smaller firms who will help nurture you past this first hurdle.

A senior developer is someone with a demonstrable amount of experience in either building systems of a certain type or working in a certain language. They’re much like the second level of IT support in that they’re usually responsible for helping the juniors out whilst working on the harder problems their underlings would be unable to do. Again at this level there’s some expectation of training in order to sharpen your skills up to match what your employer requires, and this is the time to look at specializing.

Developers don’t technically have a third level like IT support, however once they’re past the junior level specializing in one kind of development (say SAP customizations) becomes far too lucrative to pass up. There are varying levels of specialisation available and this is when many people will make the jump into a field they’re interested in, say games or web, that demands a certain level of experience before taking them on.

I never got past the junior developer level, mostly because I jumped into a System Administrator position before I had the chance to develop my programming career any further. I’ve kept my skills sharp, though, through creating automation scripts and various programs that served specific purposes, but none so much as my current pet project Geon. I don’t think I’ll ever develop for anyone else though, as the last large project I worked on was more clerical admin work than actual programming.

Communications:

Whilst not terribly distinct from the IT support career path, those in the business of providing networks and communications links for the various computer systems out there deserve their own mention, as their technology predates the first real computer by over 70 years. Ostensibly they will spend most of their career using computers, but only to administer the communication technology they’re responsible for.

At the heart of this career path are the same 3 levels, with the first being an almost identical help desk hell. However instead of working on the computer systems that you know and love they work on the cables and interconnects that keep the information flowing around the world. The number of jobs available is heavily dependent on which brand of network devices you choose to base your career around, with the largest one currently being Cisco. Specialisations tend even further down the telecommunications path, with most of them being either things like Cisco Certified Internetwork Expert (with a test that reputedly has an 80% fail rate on the first try) or something like a PABX/VoIP (basically telephones) expert.

I have a minimum amount of knowledge in this area as I skipped out on my college’s computer networking course and found my career in IT support much easier 🙂

System Design:

I’ve struggled to find anyone who understands the term Business Analyst but doesn’t work in IT. In essence these people are the interface between the real world, who want some kind of computer based system, and those of us who have the skills to provide it. This is yet another position which usually requires some form of academic accreditation before anyone will take you seriously, and even then some people might feel like you’re still getting in their way.

People employed as business analysts are probably the most removed from actual IT whilst still being counted as part of it. There’s very little technical experience required to become one, but you do have to have a keen eye for identifying what people want, managing their expectations, and acting as a glorified telephone between the everyman and the IT nerds. Interestingly enough this is one of the areas where a healthy percentage of the employees are women, something that is quite rare in the world of IT.

The next step for business analysts is usually the position somewhat wrongly referred to as an Architect. These are the people responsible for setting out a strategic direction for whole systems and whose work is usually at a fairly high level. Traditionally these kinds of people work side by side with project managers to organise various resources in order to deliver their vision, but that’s where the tenuous relationship to real architects ends. In fact it’s more common to find third level IT support people graduate to the architect position thanks to their grass roots experience in delivering the systems that architects set out for them.

I’ve worked with a few architects and for the most part they’re worth the top dollars they’re paid. The ones that weren’t simply didn’t communicate with their experts and promised things that just weren’t possible.

Sales/Consulting:

Once you’ve reached a certain point in any of the previous career paths I’ve mentioned there’s always an option to switch over to the sales side of IT. Whilst this position isn’t highly suited to many who join the ranks of IT (high levels of social interaction? Say it ain’t so!) I’ve known more than a few who made the jump mostly because of the money and travel opportunities it provides.

For those who come directly from IT they’re usually placed into what’s called a Pre-Sales role. Rather than actually selling anything directly they’re responsible for getting into the client’s environment and working out what they need, much like a business analyst. They’ll then draw up a bill of materials for the system and then hand it off to their sales team to close the deal. The reason pure IT people are attracted to these kinds of positions is that you’re still required to have a high level of knowledge about certain systems but don’t have to be involved in their support, which can be quite refreshing after many years of fixing someone else’s problems.

For the softer IT career choices there’s the option of becoming a consultant, basically a gun for hire. Once you’ve achieved a high level of specialization it becomes profitable to work either freelance or as part of a larger consulting group who will hire you out to clients with very specific requirements. Usually consultants are used to get an outside opinion on something or to analyse a certain system or process. It’s quite lucrative as there are few overheads beyond what a basic entry level employee has, but the going rates for their time are almost an order of magnitude higher.

There are of course many more ancillary positions in IT but with this post dragging on a bit I thought I would leave it there. In essence I wanted to convey the breadth of careers that IT offers to people and how far away from computers you can be yet still be “in IT”. Maybe next time you’ll think twice before asking your friend in IT to fix your computer 😉