Posts Tagged ‘information’

Information Might Not be Lost to Black Holes, Hawking Posits.

Black holes are a never-ending source of scientific intrigue. They form when a sufficiently massive star, one many times the mass of our own Sun, reaches the end of its life, fusing heavier and heavier elements until it can fuse no more. At that point the outward pressure from those fusion reactions can no longer counteract the star’s own gravity and the core collapses inwards, triggering a calamitous explosion known as a supernova. What remains collapses down to a point of effectively infinite density from which nothing, not even light, can escape. Properties like that mean black holes do very strange things, most of which aren’t explained adequately by our current models of the universe. One such thing is the Information Paradox, which has perplexed scientists for about as long as the idea of black holes has been around.
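For the curious, that point of no return is set by the Schwarzschild radius, a standard result from general relativity included here for context:

```latex
r_s = \frac{2GM}{c^2}
```

Squeeze any mass $M$ inside that radius and the escape velocity exceeds the speed of light; for one solar mass it works out to roughly 3 kilometres.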


The paradox stems from the interaction between general relativity (Einstein’s description of gravity as a property of spacetime) and quantum mechanics (the processes that govern atoms, photons and other particles). Their interaction suggests that physical information about anything that crosses a black hole’s event horizon could be destroyed. The problem is that this violates the generally held idea that if we know the state of a system at one point in time we should be able to determine its state at any other point in time. Put simply: if something falls into a black hole, you have no way of working out what it was or when it fell in, because the information describing it has been destroyed.
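That generally held idea is unitarity: a closed quantum system evolves under a unitary operator, so its evolution can always be run backwards to recover an earlier state. In the standard textbook form:

```latex
|\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle
\quad\Longrightarrow\quad
|\psi(0)\rangle = e^{+iHt/\hbar}\,|\psi(t)\rangle
```

If a black hole truly destroys information then no such inverse exists for anything that crossed the horizon, and that is the paradox in a nutshell.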

However renowned physicist Stephen Hawking, whose work on black holes is so notable that one feature of them (Hawking Radiation) is named after him, has theorized that the information might not be lost at all. Instead of the information being lost or stored within the black hole itself, Hawking proposes that it is stored as a supertranslation (or a hologram, a 2D representation of 3D data) in the event horizon. Whilst for all practical purposes this means the information is still out of reach, i.e. you likely wouldn’t be able to reconstruct the state of a system prior to its particles crossing the event horizon, it would resolve the paradox.

The idea might not be as radical as you first think, as other researchers in the area, like Gerard ’t Hooft (who was present at the conference where Hawking presented this idea), have been exploring ideas in the same vein. There’s definitely a lot of research to be done here, mostly to see whether the idea can be supported by current models or whether it warrants fundamental changes to them. If the idea holds up to further scrutiny it’ll resolve one of the most perplexing puzzles black holes pose, but there are still many more that await.

PowerPoint Doesn’t Need to Die, Bad Communicators Do.

Ah PowerPoint, the thing that everyone seems to loathe when they walk into a meeting yet still, when it comes time for them to present something, it’s the first tool they look to for getting their idea across. Indeed in my professional career I’ve spent many hours standing in front of a projection screen, the wall behind me illuminated by slide after slide of information I was hoping to convey, jabbering on about what the words behind me meant. It seems that every year someone calls for the death of the de facto presentation tool, lamenting its use in many well publicised scandals and failures. However, like the poor workman who blames his tools, PowerPoint is not responsible for many of the ills aimed at it. That, unfortunately, lies with the people who use it.


PowerPoint, like every Microsoft Office product, when put in the hands of the masses ends up being used in ways it never should have been. This does not necessarily mean the tool is bad (indeed I’d like to see a valid argument for the death of, say, Word given the grave misuses it has been put to), more that it was likely not the most appropriate medium for the message it was trying to convey or the audience it was presented to. When used in its most appropriate setting, which I contend is as a sort of public prompt card for both the speaker and the audience, PowerPoint works exceptionally well for conveying ideas and concepts. What it’s not great at is presenting complex data in a readily digestible format.

But then again there are very few tools that can.

You see many of the grave failings that have been attributed to PowerPoint are the result of its users attempting to cram an inordinate amount of information into a single panel, hoping that it somehow all makes its way across to the audience. PowerPoint, on its own, simply does not have the capability to distill information down in that manner and as such relies on the user’s ability to do it. If the user lacks the ability to do that both coherently and accurately then the result will, obviously, not be usable. There’s no real easy solution to this, as creating infographics that convey real information in a digestible format is a discipline unto itself, but blaming the tool for the ills of its users, and thus calling for a ban on its use, seems awfully shortsighted.

Indeed if it was not PowerPoint then it would be another member of the Microsoft Office suite that would be met with the same derision, as they all have the capability to display information in some capacity, just not in the format that most presentations follow. Every time people have lamented PowerPoint to me I’ve asked them to suggest an alternative tool that solves the issues they speak of, and every time I have not received a satisfactory answer. The fact of the matter is that, as a presentation tool, PowerPoint is one of the top in its class and that’s why so many turn to it. That it’s found at the centre of a lot of well publicised problems isn’t because it’s inherently problematic, just that it’s the most popular tool for the job.

What really needs to improve is the way in which we take intricate and complex data and distill it down to its essence for imparting to others. This is an incredibly wide and diverse problem space, one that entire companies have founded their business models on. It is not something we can pin on a simple presentation tool; it requires a fundamental shift away from thinking that complex ideas can be summed up in a handful of words and a couple of pretty pictures. Should we want to impart knowledge upon someone else then it is up to us to take them on that journey, crafting an experience that leaves them with enough information to be able to impart that idea to someone else. If you’re not capable of doing that, neither PowerPoint nor any other piece of software will help you.

The Time Delta From Strange to Commonplace.

New technology always seems to border on the edge of being weird or creepy. Back in the 1970s and 80s it was weird to be into games, locking yourself away for hours at a time in a darkened room staring at a glowing screen. Then the children (and adults) of that time grew up and suddenly spending your leisure time doing something other than watching TV or reading a book became an acceptable activity. The same trend has played out more recently with the advent of social networks and smartphones, with people now divulging information onto public forums at a rate that would’ve made the 1990s versions of themselves blush. What I’ve come to notice is that the time period between something being weird or creepy and becoming acceptable is getting smaller, and the rate at which it’s shrinking is accelerating.


The smartphone you now carry with you everywhere is a constant source of things that were once considered on the borderline of acceptable but are now part of your life. Features like Google Now and Siri have their digital fingers through all your data, combing it for various bits of useful information they can whip up into their slick interfaces. When these first came out everyone was apprehensive about them (the fact that Google could pick up on travel itineraries and then display your flight times was downright spooky for some), but here we are a year or so later and features like that aren’t so weird anymore; hell, they’re even expected.

The factor that appears to break down barriers for us consumers is convenience. If a feature or product borders on the edge of being creepy but provides us with a level of convenience we couldn’t have otherwise, we seem to have a very easy time accommodating it. Take for instance Disney’s new MagicBand (part of its MyMagic+ program), which you program with your itinerary, preferences and food choices before you arrive at one of their amusement parks. Sure it might be a little weird to walk into a restaurant without having to order or pay, or to walk up to rides and bypass the queue, but you probably won’t be thinking about how weird that is when you’re in the thick of it. Indeed things like MyMagic+ break down barriers that would otherwise impinge on the experience and thus they work themselves easily into what we deem acceptable.

The same can be said for self-driving cars. Whilst techno junkies like myself can’t wait for the day when taking the wheel is optional, the wider public is far more wary of what the implications of self-driving cars will be. This is why many companies have decided not to release a fully fledged vehicle first, instead opting to slowly incorporate pieces of the technology into their cars to see which features customers react positively to. You’ll know these features as things like automatic emergency braking, lane assist and smart cruise control. All of these are things you’d find in a fully fledged self-driving car, but instead of being some kind of voodoo magic they’re essentially just augments to things you’re already used to. In fact some of these systems are good enough that cars can effectively drive themselves in certain situations, although it’s probably not advised to do what this guy does.

Measuring the time between cultural shifts is tricky, as it can really only be done in retrospect, but I feel the general idea holds: the journey from weird to accepted has been accelerating. Primarily this is a reflection of the accelerating pace of innovation, where technological leaps that once took decades now take place in mere years. Thus we’re far more accepting of change happening at such a rapid pace, and it doesn’t take long for a feature that was once considered borderline to seem passé. This is also a byproduct of how the majority of information is consumed now, with novelty and immediacy held above most other attributes. Combined, all of this primes us to accept changes at a greater rate, producing a positive feedback loop that drives technology and innovation faster still.

What this means, for me at least, is that the information driven future we’re currently hurtling towards might look scary on the surface but will likely be far less worrisome when it finally arrives. There are still good conversations to be had around privacy and how corporations and governments handle our data, but past that the innovations that come of it are likely to be accepted much faster than anyone currently predicts. That is, if they adhere to the core tenet of providing value and convenience for the end user; should a product neglect that, it will fast find itself in the realm of obsolescence.

Norton Internet Security 2011: My How Things Have Changed.

It’s been a long time since I used a Norton product. Way back when I had just started working for Dick Smith Electronics I can remember happily recommending their products to nearly every customer that walked through the door, and rarely did I get any complaints back. That all changed when I moved on to actually fixing people’s computers, whereupon I discovered that Norton’s latest incarnation (then 2004) was actually worse than the problems it was trying to solve. So many times I’d fully clean up a PC only to have it bog down again when I put Norton back on, so you can imagine my scepticism when I was approached to review their latest version, Norton Internet Security 2011. Still, I figured they couldn’t have stayed in business if their products had continued down the path they were on all those years ago, so I decided to give it a go and see how far (or not) they had come.

Still, I wasn’t entirely ready to risk my main machine on this, so I fired up a Windows 7 virtual machine on my server and began the installation there. Installing Norton took just under 10 minutes, including the time it took to download updates. Interestingly the installer updated itself before attempting to install on my system, a welcome change from updating afterwards. Doing so before installation means that Norton should be capable of detecting threats that might try to subvert the installation process if you’re trying to clean an already compromised system. Unfortunately, before the install completes you have to provide your registration key, meaning there’s no free trial should you want to give the software to friends to try before they buy it. Still, the retail copy allows you to protect up to 3 PCs with the one purchase, enough to cover most households. Part of the installation process will also ask if you want to participate in the Norton Community, which I’d definitely recommend you do (more on this later).

The user interface is a world away from the Norton that I remembered. The main screen is very well laid out with all the needed features available right up front; I rarely had to dig more than one or two layers deep to find a setting I was looking for. The map at the bottom of the screen shows recent cyber crime incidents across the world (although how they define these is a bit of a mystery) and is pretty cool to watch as it ticks slowly over the past 24 hours. By itself though it doesn’t really add much value for the regular user apart from possibly piquing their curiosity about the events.

At this point a regular user could close the program and leave it at that since everything else is taken care of automatically by Norton Internet Security. This was why I used to recommend Norton products to people as they required the least amount of intervention from users to ensure that they kept working as intended. For the super and power users however there’s a fair bit more value that can be unlocked if you want to go digging a little deeper into Norton Internet Security, as I’ll show you below.

Before I get into the guts of this program let me talk about its performance. Talk to any long-time Windows administrator and they’ll tell you that anti-virus programs can be some of the most performance-degrading applications you can install on your PC. This isn’t through any fault of their own; rather, to provide the maximum level of security they have to be constantly active, ensuring they’re ready for any incoming threats. Norton used to be the worst of the lot in this regard, often bringing top of the line equipment to its knees in order to keep it safe.

Norton Internet Security 2011 however has progressed quite significantly since my encounters with its previous incarnations. Keen readers would’ve noticed that the main screen of Norton had a Performance link on it which reveals the screen shown above. The period shown before the two large spikes was completely idle and you can see that Norton does a good job of keeping its resource usage low during these periods. The two large spikes are from me performing a scan across about 600GB of data and doing that will use up most of your available system resources whilst the scan is running its course. This isn’t unique to Norton however and the scanning itself was quite quick, taking just under an hour to complete. The System Insight section provides an overview of what has been happening on your system over the past month. For an administrator like me such information can be quite valuable especially when trying to diagnose when some problem may have originated.

The meat of any AV program however is in its ability to catch potential problems before they can do any harm, which Norton Internet Security seems quite capable of doing.

The EICAR file is a virus test file designed to trigger any AV product. Upon downloading it I was greeted with a little pop up in my browser that said it was scanning the file for viruses and not too long after I was presented with this. As you can see, not only does Norton identify the file and remove it before it has a chance to inflict any damage, it also provides a wealth of information about the potential threat it removed from your system. This is where the power of the Norton Community comes in, as it gives you some idea of how widespread a threat might be and what it might do to your system if it were to infiltrate it. This kind of information is great for empowering users, making them aware of what’s happening and hopefully educating them to avoid such things in the future. Most users probably won’t take advantage of this but it’s still quite useful for power users or system administrators.
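If you want to run the same test yourself, the EICAR string is publicly documented and completely harmless; a couple of lines of C# will drop it on disk (a sketch of my test, nothing to do with Norton’s own code):

```csharp
// Write the industry-standard EICAR test signature to disk. The 68-character
// string is harmless by design; every mainstream AV product detects it.
using System.IO;

class EicarTest
{
    const string Eicar =
        @"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*";

    static void Main()
    {
        // Any resident shield worth its salt should quarantine this file
        // the moment it hits the disk.
        File.WriteAllText("eicar.com", Eicar);
    }
}
```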

The feature even extends to running processes, which becomes quite handy for anything you might be suspicious of but aren’t quite sure about. Again, this kind of information might not be particularly useful to the user directly but it could prove quite valuable to administrators or super users attempting to troubleshoot issues.

The second feature set is the network protection section which encompasses two interesting features: Vulnerability Protection and the Network Security Map.

Vulnerability protection is an interesting idea. In essence Norton Internet Security can protect against known flaws in particular programs, preventing the exploits themselves from working. Whilst the vast majority of these exploits have been patched, not all users are rigorous with their updates and Norton can help cover the gap for them. Additionally this allows Norton to respond to threats quite quickly, nullifying their effects whilst the software vendors work on releasing a patch. Since there’s usually a month between patch cycles, this feature goes a long way towards securing users against imminent threats they might not even be aware of.

The network security map gives you a broad overview of the network you’re on and the other devices connected to it. This kind of thing can be helpful for users who are on public internet connections and want to be sure that they’re safe. Whilst this can’t detect any of the advanced threats (like a compromised access point running a man in the middle attack) it does give users some much needed guidance on when they should and shouldn’t be doing things over a public connection. The information on other hosts is interesting too, as it’s basically an IP and port scanner. Normal users probably won’t care about the information contained in here, but after the hassle I went through to spoof a MAC address for free wifi in Los Angeles this kind of thing is quite valuable (if for all the wrong reasons ;)).
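For a sense of what that host scanning boils down to, here’s a hedged illustration of the kind of reachability check such a map performs (none of this is Norton’s code; the address is a made-up LAN neighbour):

```csharp
// Probe a handful of common ports on a LAN host, giving up after one second
// per port. This is the essence of what any network map's port scan does.
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

class PortProbe
{
    static async Task Main()
    {
        string host = "192.168.1.10";           // hypothetical neighbour on the LAN
        int[] commonPorts = { 22, 80, 139, 445, 3389 };

        foreach (int port in commonPorts)
        {
            using var client = new TcpClient();
            var connect = client.ConnectAsync(host, port);
            // Whichever finishes first wins: the connection or a 1s timeout.
            bool open = await Task.WhenAny(connect, Task.Delay(1000)) == connect
                        && client.Connected;
            Console.WriteLine($"{host}:{port} {(open ? "open" : "closed/filtered")}");
        }
    }
}
```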

Lastly there’s the Web Protection section, which contains an identity safe, credit card store and a parental controls section. Whilst there are already many password saving solutions out there, the fact that Norton includes one is a good step towards improving a user’s security. Using a password store means that should you be compromised with a keylogger, a malicious attacker won’t be able to get ahold of your passwords as you type them in. Sure, there’s the possibility they’ll crack the store, but it’s another layer of security that can help reduce the impact of a compromised system. The same can be said for the credit card store: whilst credit card details are one of the few things you’d normally avoid storing anywhere on your computer, a dedicated store provides similar benefits to the password safe.

I didn’t get into the parental controls section much as it’s very much geared towards fretting parents who require fine grained control over their child’s online experience. It provides all the useful goodies of being able to see what your kids are doing online and creating rule sets for browsing, but probably the most useful part of it is the online resources for educating children on safe web behaviour. Personally I’m a fan of keeping the PCs in a communal area and being an active online participant yourself instead of approaching the problem at arm’s length with tools like this. Still, it wouldn’t be in the product if users hadn’t been asking for it, so I’m sure many will appreciate its inclusion.

To be honest I went into this review with a great deal of scepticism, thinking that Norton wouldn’t have changed their sinful ways despite their continued existence. I’m glad to say that my experience with their latest product, Norton Internet Security 2011, changed all that: they’ve delivered a program I wouldn’t hesitate to recommend, or to use myself. Harnessing their large user base and feeding the information it gathers back to users is an excellent way to improve security, and for power users like me it’s something that will give me just that little bit of an edge when dealing with unknown issues. Before I reviewed this product I didn’t think I’d ever need to pay for anti-virus again, as the likes of Microsoft Security Essentials covered all the required functionality. Now however I can see the vast difference between a paid product like this and its free cousins, and I couldn’t bring myself to say that buying Norton Internet Security would be money wasted. If you’re looking for a paid anti-virus product with a wealth of features you won’t go wrong with Norton Internet Security 2011.

Norton Internet Security 2011 is available from most software stores and online for AU$69.99. A copy of this software was provided to me free of charge for the purposes of reviewing it. All testing was conducted on a Windows 7 virtual machine running on VMware ESXi with 2 vCPUs, 2GB RAM and a 40GB HDD.

Ah, I Love The Smell of Competition in The Morning.

Hardly a few weeks go by without my heart catching in my throat at the sight of a new web service that closely resembles Geon. A year ago I had the confidence that no one was doing anything like what I was thinking of, and as such the target audience I was going after was all mine. More recently though there have been a few services beginning to encroach on my territory, but for the most part they were far enough away that I could write them off as filling a different niche. This morning however saw a product come out that is basically identical to Geon’s core concept of “What’s going on there?”, so you’d think I’d be sitting here wringing my hands with worry.

The thing is though I’m chomping at the bit to beat them at their own game.

This isn’t the first time a location based communication app has crossed my path. The first was called BlockChalk, an interesting idea about leaving messages around your block for other people to find. They appear to be quite mature as well; their iPhone app is solid and they even have an API that’s pretty open. I initially stumbled across them in my first search for data feeds with geo metadata and almost lost it when I started browsing the service. Still, they’ve been around for a while and they didn’t appear to be garnering a lot of traffic or media attention, nor did they have some of the capabilities that I was planning to integrate into Geon. They’re on my watch list (especially considering the talent they’ve managed to rake in) but in reality they just proved that there was a market for something like what I was developing, always a good sign.

This morning however saw the launch of a new application called Qilroy and I’ll be damned if these guys aren’t right up my alley:

Qilroy, a Qualcomm Service Labs-incubated project, launches today as a platform that groups tweets and other status updates by location. Like “calling a payphone at the mall,” Qilroy introduces a concept called peer-to-place communication, which enables multi-platform conversations to take place from anywhere in the world. The name is a Qualcomm take-off of “Kilroy Was Here” and the service lets users share their location with others and also see a visual of all the conversations happening around any location. Users can type in any zip code or place like “The Eiffel Tower” or “Athens, Greece” for instance and interact through the Qilroy platform, Facebook or Twitter with anyone in that location who is sending open updates from Twitter, Foursquare or Gowalla.

Aggregating information feeds based on location? Allowing users to post messages to a location? Yep, either this is a case of independent inspiration or someone has been reading my blog over the past couple of years and implemented the idea quicker than I could. I’m tending towards the former, as the service shares many core principles that I’ve discussed on this blog previously, but there are several differences that separate us. Most notably they’re looking a lot like Twitter, opting to farm out the additional services (like picture hosting) to others in order to keep their service simple. They’ve also made the smart move of letting you start conversations through other mediums in Qilroy, which will break down the initial barrier of getting a user to install yet another application. I’d say it’s a decent attempt at the location based communication idea (despite its launch day woes) and I can see people using it.

But don’t think that means I’m giving up on Geon. In fact this has made me more convinced than ever that I’m onto something, and that it’s the best out of the lot.

I’ve been keeping the latest version of Geon on the down low for a while now, alluding to the fact that I had completely dumped the last design (there’s a picture of it somewhere on this site, see if you can find it!) and codebase in favour of revamping it with a focus on the core idea of finding out what’s going on at a certain location. It’s come along quite well, with many features I’d long put off now in the application and functioning as expected. In fact the web client is almost complete at a core level, meaning that I’ll be working on the iPhone application in the next week or so with a private beta to follow shortly after. All I really want to say at this point is that whilst I may have solid competition in the form of BlockChalk and Qilroy, I know I can beat them at their own game. Their presence confirms that my idea has a tangible market and that only motivates me to do more.

So, my competitors, even though you probably won’t see this post until long after I’ve launched my application, hear this: I’m gunning for you. I might not be the best developer, best business manager or best anything out there, but I’m determined to build this product that’s been rattling around in my head for almost two years. Anyone who knows me will tell you that if I’m determined to get something done it will happen, by hook or by crook, and I’ll be damned if anyone other than me becomes the king of this location space.

Game on.

Shouldn’t Information Transcend Formats?

I just don’t get books. There’s something inherently anti-social about picking one up and plonking yourself down to read a couple of chapters, as you’re publicly announcing “I’m doing something and I shouldn’t be disturbed”. Still, the act of sharing that anti-social experience can be quite social, as I’ve had many great discussions about the few books that I’ve read over my lifetime. Yet I struggle to get through dead trees even when I make an active effort to do so. My latest victim, The Four Hour Work Week, has been in my backpack for the past 6 months, and for the last 5 of those I’ve had around 100 pages to go. For some reason I just can’t be bothered sitting down and slogging through page after page of the centuries-old medium, but that doesn’t mean I don’t crave the content.

During a long stretch of not having a whole lot to do at work I discovered the wonderful world of RSS feeds. Gone was my endless list of poorly organised bookmarks and in its place was a lovely unified view of all those websites I loved to frequent. After fiddling around with a couple of installed RSS readers I eventually turned to Google Reader and I haven’t looked back since. Every day I can spin through a couple of hundred articles in quick succession, with the better ones usually inspiring a blog post or two. I’d say that on average I read about 2~3 books worth of online content per week, possibly double or triple that if I’m elbow deep in research for a particular problem.

So the question remains, why don’t I get books? I know I have a pretty insatiable hunger for information on various subjects and the bite sized chunks I get online, whilst very well suited to my almost permanently Internet connected life, are usually too small to get a decent understanding of something. Additionally I remember one of my college English teachers telling me that my generation was apparently the last one that would have any respect for the medium as the generations who followed us would get all their information from online sources. Whilst I don’t agree with her vision completely (thanks in part to the whole Twilight phenomenon, I mean they did read the books right?) it does seem that when it comes to getting information on a particular subject I don’t even think about visiting a library, let alone picking up a book.

The answer then is most likely one of convenience. I can, on any device capable of browsing the Internet, open up a page with a dedicated stream of information tailored exactly to my interests. Books on the other hand are usually aimed at a single subject and unfortunately require me to carry them with me when I want to read them. I thought the answer would lie in eBooks but unfortunately they seem to suffer the same fate as their dead tree companions. You could probably put this down to a short attention span when it comes to absorbing information, as most online content is designed to be consumed in less than 5 minutes, and trying to read a book like that just doesn’t seem to work for me (or anyone else I’ve seen read books, for that matter).

There are some notable exceptions though. Way back in the middle of my time at university a good friend of mine handed me a copy of the first book in the Night’s Dawn trilogy by Peter F. Hamilton. After sharing a love for the revamped Battlestar Galactica he handed me the book saying that if I liked that kind of sci-fi, I’d love this. I hadn’t read an entire book in well over 3 years so initially I struggled to get into it. The entire trilogy took me a year to read but I savoured every last word of it, often stealing an hour away from my classes to sit on the university concourse and bathe in the warm summer sun whilst my mind was firmly planted in this epic space opera. I have yet to be that captivated by a book again, as my last attempt at another of Hamilton’s works had me 20 pages in before I was told I was reading the wrong book in the trilogy (that’s the last time I trust you, Dave).

Maybe as I get more time to myself I’ll find the time for books. Right now though my life is filled with so many other activities that getting through a book always feels like a chore, one that doesn’t get me very far as it rarely satisfies a pressing want or need I have at the time. With most of my free time spent playing through an enormous backlog of games (which just spurred an idea for a post tomorrow, stay tuned! 😉 ) books are one of those things I’ll let fall by the wayside, watching them rush past as the torrent of the Internet sweeps them away.

Information Overload.

Way back in the days when the Internet was only a trickle into Australia I remember the information available being sparse and unreliable. Many teachers would not accept any information from a website as part of research for a school assignment, and rightly so: there was little if any way to verify that information. The exercise was then left to us to read through countless books in order to back up any statement or opinion we might put forward. Today however the Internet is bristling with information and authoritative sources are popping up all over the place. The interesting thing about this is that, due to the sheer volume of information available, you’re almost guaranteed to find some article or news piece that agrees with what you say, which has led me down a very confounding train of thought.

I’ll take something that I know well as an example: the economy. Now I’ve made my stance known on this in the past and the data seems to be on my side. For the most part Australia is narrowly avoiding a recession due to our banks being well capitalised and a government not afraid of going into debt to spur the economy on. However I could easily argue the opposite, and in fact a lot of people are. Just to show you how crazy the situation is, take for instance these two articles. Both were written at the same time yet they decree completely different viewpoints. These aren’t the only examples either, and it is quite easy to make your point using what appear to be authoritative sources. This then raises the question: is there really a correct answer here?

The truth often lies somewhere between two dissenting viewpoints, especially when it comes to issues that can’t have a definitive answer, such as the economy. However due to the volume of information it becomes easy for one side to write the other off, since they appear to have so much support for their side of the argument. This unfortunately leads to a phenomenon best described as wikiality, or truth by majority vote. In the end it probably won’t matter that you have the majority of data on your side because if you’re in the minority, when debating using the information available on the Internet, you will eventually be “proven” wrong. It is an unfortunate consequence of this information overload.

There has been a lot of work done over the past decade to create authoritative information sources, however with the advent of easy access to the Internet and its publishing capabilities they are soon lost in the noise. I often try to link to articles from these sources in order to promote them, but I can’t say that I’m innocent in this regard either. All too often I link to Wikipedia hoping that people will scroll to the bottom to read actual articles from proper sources, but I know that’s not usually the case. Overall Wikipedia is a good source, however the mentality of wikiality makes some articles unusable, and it can be hard to tell them apart.

To be honest though, we’re better off having too much information than not enough. There is enough information out there for anyone to make up their own mind on pretty much any issue that comes up. It is regrettable that the noise is so high, but that is the price we pay for the ultimate freedom of allowing anyone instant access to both read and publish limitless information.

Arrrrrgggghhhh the cognitive dissonance! 😉

Twitter: Information With a Short Lifetime.

Last night during my weekly drinks with friends the topic of Twitter came up. Whilst I can’t profess to being one of those ahead-of-the-trend hipsters who were into Twitter before it was cool, I did manage to find a good use for it as part of the glue logic between this blog and my Facebook page. For a while I was content that I was using Twitter in a way that kept me at arm’s length from the twidiots but still gave me some value. More recently however I’ve come to use it quite a lot more as a place to find information that I don’t typically find elsewhere, and my conversations last night showed how little value this information has outside of personal consumption.

The first thing that pops into people’s heads when you mention Twitter is a universe filled with crazy people shouting random meaningless messages at each other in the hope that someone will listen or that what they’re saying has some importance. It’s the same idea that first came about when blogging was introduced, so it’s no surprise that the new way of blogging (“micro-blogging”) suffers from the same initial teething problems its forefather did. What happened afterwards was also identical to what happened to blogging: famous people started using it. Once it became cool to be on Twitter everyone and his dog (and even his taser rifle) hopped on as well. Today Twitter is one of the top 20 most visited sites on the Internet and, because of this volume, it has started to show some value.

I follow about 20 odd people on Twitter. Most of them are friends or people who I find interesting. Usually the things I find on there are the kind of thing a work colleague would call you over to have a look at when they find something interesting or cool, and as such their value outside that circle is limited. A great example of this was when Adam Savage of Mythbusters fame posted up a pic of Kari Byron and her daughter Ruby. For someone like me who follows the show religiously this was a pretty cool thing to see, however outside of actually seeing it directly on Twitter that information has limited value. This became ever so clear last night when I was trying to describe what I use Twitter for to someone else and used that as an example. They had watched the show but really didn’t know who I was going on about. It would then seem that such information is really only useful to those who are directly involved in consuming it.

Additionally, due to Twitter’s encouragement of short but frequent bursts of information, the usefulness of anything conveyed through the medium is time limited. Rarely would anyone hop on Twitter and read through a week’s backlog of tweets, and in fact unless you use a third party Twitter app you’re going to be loading a set number of tweets at a time. Even following just 20-ish people I can find myself sifting through 3 or 4 pages of tweets for a single day. As such, information published through this medium will not live long, as it will soon be buried under the torrent of the next big Twitter twitch.

What I believe Twitter and its related micro-blogging services can become is a bridge between the initial event, the breaking news or release of information, and the reporting done by traditional sources. Whilst there are some outlets experimenting with the Twitter platform currently, the array of different approaches taken shows that this kind of service still lacks the maturity of traditional information sources. With the advent of something like Google Wave I can see this kind of pre-news idea gaining more ground, but right now the medium seems limited to interesting but short-lived information.

And with any blog post about Twitter here’s my chance to pimp my profile and beg you all to follow me 😉

Geon 1.1 Update.

It’s that time again! I’ve updated my Geon application to version 1.1 and this brings along with it a UI change, a shift in focus for some things and of course new features. This time around though I thought I’d give you a walk through of what Geon can do and how you can go about using it. This will also give me a chance to explain away any problems that you’ll see, since this is still technically what I’d call a beta (because it’s far from feature complete).

Opening up the Geon page will greet you with a slightly more usable interface than the previous version. It’s now a 3 column layout with statically set widths for most of the items. It’s best viewed at 1680 x 1050 but it’s still usable at lower resolutions. The left column has a set of check boxes for choosing what information you want to see and a filter box at the bottom. The center column is a map that when clicked will change your current location to where you clicked, so you can view an information feed from another area. The right column is for part of a future release that will allow you to send requests to other Geon users in that area for pictures/video/text, and also allow you to respond to requests. For now the right column will only notify you when you change your location, but soon it will display all the recent Geon requests and responses for your area.

Ticking any of the boxes will bring up information from that source. For Twitter, News and Blogs this will appear in the left column as text and links; for Flickr the results appear as markers on the map. Should you wish to apply a filter, say for the rally that took place in Sydney on the weekend, you can put your filter terms in the box below. This is handy if you want to see an information stream for an event that might not be hitting the front page news, as most of it gets drowned out by the headlines. Geon will initially retrieve a maximum of 15 results for each of the services regardless of time frame and will attempt live updates after that. One thing I have noticed is that blog posts and news items will usually be at the bottom due to their publishing time. Tweets, as per their nature, will usually be at the top. The pause check box shows whether or not the feed is attempting to live update, and similar to the last Geon release there are a few reasons why you’d want to stop it (wanting to read your feed without the scroll bar snapping back up to the top is one).
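For the curious, here’s a rough sketch of that fetch-and-merge step (not the actual Geon code; the feed URLs are illustrative placeholders, while the 15-item cap per source mirrors the behaviour described above):

```csharp
// Pull each source's RSS/Atom feed, keep the 15 newest items per source and
// merge everything into one chronological list, newest first.
using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

class FeedMerge
{
    static void Main()
    {
        string[] sources =
        {
            "http://example.com/twitter-by-location.rss",  // hypothetical per-location feeds
            "http://example.com/news-by-location.rss",
        };

        var merged = sources
            .SelectMany(url =>
            {
                using var reader = XmlReader.Create(url);
                // Cap each source at its 15 newest items before merging.
                return SyndicationFeed.Load(reader).Items
                    .OrderByDescending(i => i.PublishDate)
                    .Take(15)
                    .ToList();
            })
            .OrderByDescending(i => i.PublishDate);  // one chronological feed

        foreach (var item in merged)
            Console.WriteLine($"{item.PublishDate:u}  {item.Title.Text}");
    }
}
```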

Now for the juicy bit. Scroll to some location on the map that you’d like to see news for and single click. After a short delay you’ll notice in the response box a message telling you that you have moved to a new location. Your information feed will now start to update from this new area. If you had information from another location open previously it will slot the information in chronologically. If you want to clear your information view before switching location just untick all the boxes and it will clear the feed list for you. I’ve noticed that if you’re viewing a busy area then switch to a quieter one (say from Sydney to Canberra) the new information will be buried in the midst of the old, which is probably not entirely useful.

And now for the all important bugs/known issues:

  • Internet Explorer is still unsupported: For the most part everything seems to work ok except for when you click the map. There’s still a wide discrepancy between where you click on the map and where it thinks you clicked. Firefox and Chrome appear fine and this could just be an IE8 issue however I haven’t taken the time to test IE6/7, mainly because I have no idea why an ASP.NET application would be having troubles in IE and not Firefox.
  • The Pause checkbox is a little iffy: For the most part it works fine but there are times when clicking it will not change the state of the timer on the page, and it will keep trundling along as if nothing happened. I’ve just thought of a way to fix it (GARGH why didn’t I see that yesterday) so I’ll fix it up when I get home.
  • Feed updates are delayed by about 30~90 seconds: Anyone following me on Twitter will have noticed me tweeting to test the live updates. Since the design is based on reading an RSS feed from Twitter and various other sources, items should show up as soon as the feed is updated (which appears to be in real time). However it sometimes takes a minute or two. I’ve got a feeling this is due to some caching my RSS client library does, so I’ll have to work on that one.
  • Feeds in busy places shuffle themselves around: If you’re watching a busy place like New York you might notice the feed rearranging itself. This is because I sort the feed by date and, when two items share the same timestamp (which happens a lot with tweets), the sort orders them arbitrarily, making the feed shuffle around a bit. It’s still better than the initial behaviour, where the feed was completely random, and a deterministic tiebreaker should fix it properly (see the sketch after this list).
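A possible fix, sketched here rather than taken from the Geon codebase: add a deterministic secondary sort key so items with identical timestamps always land in the same order.

```csharp
// Sketch only: break timestamp ties with the item's Id so equal-dated
// tweets stop shuffling between refreshes.
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;

static class FeedSort
{
    public static List<SyndicationItem> StableOrder(IEnumerable<SyndicationItem> items) =>
        items.OrderByDescending(i => i.PublishDate)  // newest first, as before
             .ThenBy(i => i.Id)                      // deterministic tiebreaker
             .ToList();
}
```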

What’s in scope for 1.2? I’m glad you asked (even if you didn’t ;)):

  • Implement the request/respond function: This will probably take the better part of a weekend to get done. I had it planned for this release but it got dropped since I had things to do other than coding.
  • Make it pretty: Right now it’s dull as dishwater to look at. It needs to be made a little better looking and if anyone out there is interested in some design/development work in order to make Geon look better feel free to contact me. You will be paid for your services.
  • Add in an information timeline: Much like Google Wave’s slider bar that allows you to see how information evolved over a period of time I want to implement something similar in Geon.

So that’s about it. I’m still taking all feature requests/ideas for Geon so if you think something would be cool or useful just leave a comment or give me an email.

Presenting… Geon! (and 100 posts!)

Well it may be late on a Sunday afternoon but here it is, my 100th blog post! It’s been quite a fun exercise for me and I’m hoping to bring you many more posts in the future. Hopefully they will all be interesting, but I can’t guarantee that ;). The past 7 months have seen many changes in both my personal and professional life and I feel that this blog has reflected that. I’ve been able to craft my thoughts much more succinctly after writing so much, and my spelling has definitely improved. It’s also introduced me to the wonderful world of web applications, something that I’d kept away from in the past. All of this would be for nothing if it wasn’t for you, my readers. I just want to say thanks for coming back day after day and reading and commenting on my site, it really does mean a lot when people care about what you have to say 🙂

As promised I have been working on something secretly in the background, and today marks its 1.0 release to the public. It’s a hacky, cobbled-together web application that will form the basis of a future application that I want to develop. For now I’ll be working on it under the code name Geon, which stands for Geological Information, although the final product will be a lot more than that.

For a taste hop on over to here. It’s also available from the Geon link in The Lab. Click around, see what you think it’s supposed to do, then come on back here. If you can, write down your impressions before you read on; I want to see what everyone thinks about it before I mess with your perception with my ideas 🙂

In essence the application is part of a framework for a real-time information feed based upon location. Right now it gets content from Twitter and Flickr, and additionally everyone in the same city (roughly) can talk to each other. The Flickr and Twitter buttons will bring up markers at your location, whilst clicking directly on the map will bring up Flickr pictures and Twitter posts located within that area. When you begin chatting it will start doing live updates from your area with other people who are chatting; you can disable this by unchecking the box (you’ll see why you might want to in a sec). You can change your user name too; the random string of numbers is mostly me being lazy and not implementing a full user database, but that’s on the cards for the future.

Currently it will only return the first 10 Twitter posts but it will return all the Flickr pictures in the area. I wanted to get the chats popping up there as well for this release, however I haven’t found a way to get the info windows to update dynamically; I believe this is a limitation of the API wrapper I’m using. Also if you’re chatting, any information from outside your area will probably be cleared when it next refreshes. This seems to be a fun bit of AJAX that isn’t supposed to happen: any partial postback triggers the map to update itself.

Here’s what I think is wrong with it so far (in terms of bugs):

  • Internet Explorer doesn’t work properly. The click event handler seems to report a wildly different location in IE than it does in Firefox/Chrome. For now IE is unsupported and I’ll recommend Firefox for anyone who’s having trouble using it.
  • The chat inserts new lines at the top rather than at the bottom. This is because ASP.NET doesn’t have a clean way to put the chat messages at the bottom and keep the scroll bar there. To save everyone scrolling down whenever they post a message or when it updates, I thought it best to put them at the top.
  • Live updates kill any information on the map that wasn’t added in a certain way. For some reason any partial render of the screen causes the map to think it has to do a postback too. I haven’t been able to disable this, but when you use the buttons at the bottom that information won’t be wiped. The functions are basically identical, yet I can’t get information from clicking on the map to persist. I’ve written to the author of the wrapper about this, we’ll see what he says (one possible mitigation is sketched below this list).
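One possible mitigation, assuming the map and the chat can live in separate UpdatePanels (the control names here are hypothetical, and the page would need a ScriptManager):

```aspx
<%-- Sketch: keep the map in its own conditionally-updated panel so the
     chat timer's partial postbacks leave it alone. --%>
<asp:UpdatePanel ID="MapPanel" runat="server"
                 UpdateMode="Conditional" ChildrenAsTriggers="false">
    <ContentTemplate>
        <!-- map control goes here; call MapPanel.Update() from code-behind
             only when new markers actually need rendering -->
    </ContentTemplate>
</asp:UpdatePanel>

<asp:UpdatePanel ID="ChatPanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <!-- the chat list and its refresh timer stay isolated in here -->
        <asp:Timer ID="ChatTimer" runat="server" Interval="5000"
                   OnTick="ChatTimer_Tick" />
    </ContentTemplate>
</asp:UpdatePanel>
```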

So what’s the big idea for all this? Well, what I wanted to make was an application where you could zoom in on an area and see what’s going on there. This application does most of that now, but what I’m looking to do is build in a request information section so that anyone who’s on Geon (it will be available on mobiles….one day!) can submit pictures/text/whatever back up. I thought this would be amazing for breaking news events, as long as there were enough users of course 🙂

I’d love to hear what everyone thinks about it and what you believe would be great to add in. I’ve already got a Google Wave integration idea in the works which I’m sure everyone will like. Experience has shown me that your users are the ones who matter, so I’m opening up the floodgates for you guys to craft the direction Geon takes over the coming months.