Posts Tagged 'data'


Think Negative Gearing is to Blame For High House Prices? Think Again.

Before I dig my hooks into the reasons why negative gearing isn't to blame for high house prices (a seemingly controversial view these days) I will tell you, in the interests of full disclosure, that I've been negatively gearing property for the past 5 years or so. Back when we first bought our property I lamented the dearth of good properties available in our price range, focusing much of my anger on the property boom that took place mere years before we went in to buy. However we found something that we could just afford if we played our cards right, even though it was out in the sticks of Canberra. During that time I never once blamed the negative gearers for this predicament, but the more I talk about it the more it seems my generation blames investors when they should really be looking elsewhere.

Negative Gearing Income 2010-11

Depending on what figures you've read, though, I'd find it hard to blame you for thinking otherwise, especially given the table above (from this ATO document) that has been doing the rounds lately. On the surface it seems pretty hefty, with some $7.8 billion in total losses being claimed by investors with negatively geared property. Realistically though the total cost to the government is far less than that: even if everyone was on the top marginal rate (which they aren't, most are on $80,000 per year or less) the total tax revenue loss is closer to $3.5 billion. Out of context that sounds like a lot of dosh, especially when this year's budget came in at a deficit of $18 billion, but it's only about 0.9% of total tax revenue, which is significantly dwarfed by other incentives and exemptions. If your first argument is that it costs the government too much then you're unfortunately in the wrong there, but that's not the reason I'm writing this article.
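For the sceptical, the back-of-the-envelope arithmetic behind those figures goes something like this. It's a rough sketch only: it assumes every dollar of losses was deducted at the 2010-11 top marginal rate of 45% (which overstates the real cost) and infers total tax revenue from the 0.9% figure rather than quoting it directly.

```python
# Upper bound on the revenue cost of negative gearing, 2010-11.
# Assumes every dollar of losses is deducted at the 45% top marginal rate,
# which overstates the true cost since most claimants earn under $80,000.
total_losses = 7.8e9          # net rental losses claimed (ATO figure above)
top_marginal_rate = 0.45      # 2010-11 top marginal rate, excluding Medicare levy

max_revenue_cost = total_losses * top_marginal_rate
print(f"Upper bound on forgone tax: ${max_revenue_cost / 1e9:.1f} billion")   # ~$3.5 billion

# As a share of total tax revenue (roughly $390 billion, implied by the ~0.9% figure)
total_tax_revenue = 390e9
print(f"Share of total revenue: {max_revenue_cost / total_tax_revenue:.1%}")  # ~0.9%
```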

The typical narrative against negative gearing usually tells a story of investors competing against homebuyers (usually first timers), driving up the price because they are better able to afford the property thanks to negative gearing and the higher amount of capital they have. Whilst I won't argue that this never happens, it fails to take into account the primary driver of upward trending house prices: owner occupiers. Initially this idea sounds ludicrous, since homeowners aren't taking advantage of negative gearing gains nor are they in the market for new property, but the thing is that the vast majority of the capital gains in Australian property accrue to just such people, who hold some 84% of the total property market.

In Australia the primary mechanism that drove house prices up, with most of the increase occurring between 1994 and 2004, was current home owners upgrading their houses. For a current homeowner, especially one who owns their property outright, the cost of upgrading to a larger property is a fraction of what it would cost to buy that property outright. However anyone looking to upgrade will also try to extract the maximum amount of value out of their own house in order to reduce the resulting loan, and thus the cheaper priced houses get pushed up as well. Couple that with the fact that the majority of Australian owner/occupiers move at least once every 15 years, and that selling your primary place of residence is exempt from capital gains tax, and you have a recipe for rising house prices that isn't predicated on negative gearing's influence.

Indeed the ABS Household Wealth and Wealth Distribution figures support this theory: the average value of an owner occupied property is $531,000, which is drastically higher than the Australian average (which includes all investor properties) of $365,000. Considering that the bulk of the Australian property market is dominated by owner-occupiers (investors only make up 16% of it), it's hard to see how investors could be solely responsible for the dramatic increases that many seem to blame them for. Most will retort that investors are snapping up all the properties that would-be first home owners would get, which is something I can't find any evidence for (believe me, I've been looking). The best I could come up with was the distribution of investment property among the 5 sections shown here, which would lead you to believe that investors are normally distributed across the market and not heavily weighted towards the lower end.

The final salvo shot across the negative gearing bow usually comes in the form of claims that it provides no benefit to Australia and only helps to line the pockets of wealthy investors. The counter argument is that negative gearing helps keep rent costs down, as otherwise investors would be forced to pass the majority of the cost of the mortgage on to renters, something we did see when negative gearing was temporarily removed. Indeed the government actually comes off quite well from this investment, as using that revenue to build houses instead would result in a net loss of rentable dwellings, which would put upward pressure on rents.

I completely understand the frustration that aspiring home buyers go through; I went through it myself not too long ago when I was in a position that wasn't too different from the average Australian's. But levelling the blame at investors and those who negatively gear their property for the current state of the Australian property market is at best misguided and at worst could lead to policy decisions that will leave Australia, as a whole, worse off. You may believe the contrary, and if you do I encourage you to express that view in the comments, but as I see it the current Australian property market is a product of the Great Australian Dream, not negative gearing.

Is It Wrong That I Find The President’s Surveillance Program…Intriguing?

I'm no conspiracy theorist, my feet are way too firmly planted in the world of testable observations to fall for that level of crazy, but I do love it when we the public get to see the inner workings of secretive programs, government or otherwise. Part of it is sheer voyeurism, but if I'm truthful the things that really get me are the big technical projects, things that, done without the veil of secrecy, would be wondrous in their own right. The fact that they're hidden from public view just adds to the intrigue, making you wonder why such things needed to be kept secret in the first place.

One of the first things that comes to mind is the HEXAGON series of spy satellites, high resolution observation platforms launched during the Cold War that still rival the resolution of satellites launched today. It's no secret that all spacefaring nations have fleets of satellites up there for such purposes, but the fact that the USA was able to keep the exact nature of the entire program secret for so long is quite astounding. The technology behind it was what really intrigued me, as it was years ahead of the curve in terms of capabilities, even if it didn't have the longevity of its fully digital progeny.

Yesterday however a friend sent me this document from the Electronic Frontier Foundation which provides details on something called the President's Surveillance Program (PSP). I was instantly intrigued.

According to William Binney, a former senior official at the National Security Agency, the PSP is in essence a massive data gathering program with possible intercepts at all major fibre terminations within the USA. The system simply siphons off all incoming and outgoing data, which is then stored in massive, disparate data repositories. This in itself is a mind-boggling endeavour, as the amount of data that transits the Internet in a single day dwarfs the capacity of most large data centres. The NSA then ramps it up a notch by being able to recover files, emails and all sorts of other data based on keywords and pattern matching, which implies heuristics on a level that's simply mind-blowing. Of course this is all I've got to go on at the moment, but the idea itself is quite intriguing.

For starters, creating a network that's able to handle a direct tap on a fibre connection is no small feat in itself. When the fibres terminating at the USA's borders are capable of speeds in the GB/s range, the infrastructure required to handle that is non-trivial, especially if you want to store the data for later. Storing that amount of data is another matter entirely, as most commercial arrays begin to tap out in the petabyte range. Binney's claims start to seem a little far-fetched here as he states there are plans going up into the yottabyte range, though he concedes that current incarnations of the program couldn't hold more than tens of exabytes. Barring some major shake up in the way we store data I can't fathom how they'd manage to create an array that big. Then again I don't work for the NSA.
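To get a sense of just how outlandish those storage figures are, here's a rough, purely illustrative calculation of what a full take of high-speed links would involve. The link speed and link count below are my own assumptions, not figures from the EFF document.

```python
# Rough scale check on what "tap everything" implies for storage.
# All figures below are illustrative assumptions.
link_speed_gbps = 100                                  # one modern long-haul link
bytes_per_day = link_speed_gbps * 1e9 / 8 * 86_400     # bytes captured per saturated day

PB = 1e15
EB = 1e18
print(f"One saturated {link_speed_gbps} Gb/s link: {bytes_per_day / PB:.1f} PB/day")  # ~1.1 PB/day

# How long would "tens of exabytes" last at that rate across many links?
links = 50
target_eb = 20
days = target_eb * EB / (bytes_per_day * links)
print(f"{links} such links fill {target_eb} EB in ~{days:.0f} days")                 # roughly a year
```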

As intriguing as such a system might be, there's no question that its existence is a major violation of privacy for US citizens and the wider world. Such a system is akin to tapping every single phone and recording every conversation on it, which is most definitely not supported by their current legal system. Just because they don't use it until they have a reason to doesn't make it just, either, as data gathered without suspicion of guilt or intent to commit a crime is illegitimate. I could think of many legitimate uses for the data (anonymous analytical stuff could prove very useful) but the means by which it was gathered eliminates any purpose being legitimate.

Sony, Security and Superiority Complexes.

I'm not really sure I could call myself a fanboy of any technology or company any more. Sure there are some companies whose products I really look forward to, but if they do something completely out of line I won't jump to their defense, instead choosing to openly criticize them in the hopes that they will get better. Still I like to make known which companies I may look upon with a rose tint just so that anyone reading these posts knows what they're getting themselves into. One such company is Sony, of which I've been a long time fan but which I've still criticized when I've felt they've done me wrong.

Today I’ll be doing that once again.

As you're probably already aware, the PlayStation Network (PSN), the online network that allows PS3 owners to play with each other and buy digital content, was recently compromised by an external entity. The attackers appear to have downloaded all account and credit card information stored on Sony's servers, prompting Sony to shut down the service for an unknown amount of time. The breach is of such a large scale that it has received extensive coverage in both online and traditional news outlets, raising questions about how such a breach could occur and what safeguards Sony actually had in place to prevent such an event.

Initially there was little information as to what this breach actually entailed. Sony had chosen to shut down the PSN to prevent any further breaches and left customers in the dark as to the reason. It took them a week to notify the general public that there had been a breach and another 4 days to contact customers directly. Details were still scant until Sony sent an open letter to Congress detailing their current level of knowledge on the breach. Part of the letter hinted that the hacktivist group Anonymous may have played a part as well, but did not blame them directly. More details have come to light since then.

It has also recently come to light that the servers Sony was using for the PSN were running outdated versions of the popular Apache web server and lacked even the most rudimentary security provisions that you'd expect an online service to have. This information was public knowledge several months before the breach occurred, with posts on Sony's forums detailing the PSN servers' status. As a long time system administrator I find it ludicrous that the servers were allowed to operate in such a fashion, and I'm pretty sure I know where to lay the blame.

Whilst Anonymous aren't behind this attack they may have unwittingly provided cover for part of the operation. Their planned DDoS on the PSN servers did go ahead and would've provided a timely distraction for any would-be attacker looking to exploit the network. Realistically they wouldn't have been able to get much of the data out at this point (or so I assume; Sony's servers could have shrugged off the DDoS) but it would have given them ample opportunity to set up the system for the data dump in the second breach that occurred a few days later.

No, the blame here lies squarely with those in charge, namely the PSN architects and executives. The reason I say this is simple: an engineer worth his salt wouldn't allow servers to run unpatched without strict security procedures in place. To build something on the scale of the PSN requires at least a modicum of expertise, so I can't believe that they would build a system like that unless they were instructed to do so. I believe this stems from Sony's belief that the PS3 was unhackable and as such could be trusted as a secure endpoint. Security 101 teaches you, however, that no client can be trusted with the data it sends you, and this explains why Sony became so paranoid when even the most modest of hacks showed the potential for the PS3 to be exploited. In the end it was Sony's superiority complex that did them in, pretending their castle was impregnable.

The fallout from this incident will be long and wide reaching and Sony has a helluva lot of work to do if they're going to fully recover from this damage. Whilst they're doing the right thing in offering some restitution to everyone who was affected, it will still take them a long time to rebuild all the goodwill that they've burned on this incident. Hopefully this teaches them some valuable lessons on security and they'll stop thinking they sit atop an impregnable ivory tower. In the end it will be worth it for Sony, if they choose to learn from their mistakes.

Is Tethered Internet Usage So Different?

I remember getting my first ever phone with a data plan. It was 3 years ago and I remember looking through nearly every carrier’s offerings to see where I could get the best deal. I wasn’t going to get a contract since I change my phone at least once a year (thank you FBT exemption) and I was going to buy the handset outright, so many of the bundle deals going at the time weren’t available to me. I eventually settled on 3 mobile as they had the best of both worlds in terms of plan cost and data, totaling a mere $40/month for $150 worth of calls and 1GB of data. Still when I was talking to them about how the usage was calculated I seemed to hit a nerve over certain use cases.

Now I'm not a big user of mobile data despite my daily consumption of web services on my mobile devices, usually averaging about 200MB/month. Still there have been times that I've really needed the extra capacity, like when I'm away and need an Internet connection for my laptop. Of course tethering the two devices together doesn't take much effort at all, my first phone only needed a driver for it to work, and as far as I could tell the requests would look like they were coming directly from my phone. However the sales representatives told me in no uncertain terms that I'd have to get a separate data plan if I wanted to tether my handset, or if I dared to plug my SIM card into a 3G modem.

Of course upon testing these restrictions I found them to be patently false.

Now it could've just been misinformed sales people who got mixed up when I told them what I was planning to do with my new data-enabled phone, but the notion that tethered Internet usage is somehow different to normal Internet usage wasn't new to me. In the USA pretty much every carrier will charge you a premium on top of whatever plan you've got if you want to tether your phone to another device, usually providing a special application that enables the functionality. Of course this has spurred people to develop applications that circumvent these restrictions on all the major smart phone platforms (iOS users will have to jailbreak, unfortunately) and the carriers aren't able to tell the difference. But that hasn't stopped them from taking action against those who would thwart their juicy revenue streams.

Most recently it seems that the carriers have been putting pressure on Google to remove tethering applications from the Android app store:

It seems a few American carriers have started working with Google to disable access to tethering apps in the Android Market in recent weeks, ostensibly because they make it easier for users to circumvent the official tethering capabilities offered on many recent smartphones — capabilities that carry a plan surcharge. Sure, it’s a shame that they’re doing it, but from Verizon’s perspective, it’s all about protecting revenue — business as usual. It’s Google’s role in this soap opera that’s a cause for greater concern.

Whilst this is another unfortunate sign that, no matter how hard Google tries to be "open", it will still be at the mercy of the carriers, the banning of tethering apps sets a worrying precedent for carriers looking to control the Android platform. Sure they already had a pretty good level of control over it since they all release their own custom versions of Android for handsets on their network, but now they're also exerting pressure over the one part that was ostensibly never meant to be influenced by them. I can understand that they're just trying to protect their bottom line but the question has to be asked: is tethering really that much of a big deal for them?

It could be that my view is skewed by the Australian way of doing things, where data caps are the norm and the term “unlimited” is either a scam or at dial-up level speeds. Still from what I’ve seen of the USA market many wireless data plans come with caps anyway so the bandwidth argument is out the window. Tethering to a device requires no intervention from the carrier and there are free applications available on nearly every platform that provide the required functionality. In essence the carriers are charging you for a feature that should be free and are now strong-arming Google into protecting their bottom lines.

I'm thankful that this isn't the norm here in Australia yet, but we have an unhealthy habit of imitating our friends in the USA so you can see why this kind of behavior concerns me. I'm also a firm believer in the idea that once I've bought the hardware it's mine to do with as I please, and tethering falls squarely under that. Tethering is one of those things that really shouldn't be an issue, and Google capitulating to the carriers just shows how difficult it is to operate in the mobile space, especially if you're striving to make it as open as you possibly can.

What’s The Use Case For a Cellular Tablet?

So I'm sold on the tablet idea. After resisting it ever since Apple started popularizing it with the iPad, I've finally started to find myself thinking about numerous use cases where a tablet would be far more appropriate than my current solutions. Most recently it was after turning off my main PC and sitting down to watch some TV shows, realizing that I had forgotten to set up some required downloads before doing so. Sure I could do them using the diNovo Mini keyboard, but it's not really designed for more than logging in and typing in the occasional web address. Thinking that I'd now have to power either my PC or laptop back on, I lamented that I didn't have a tablet that I could RDP into the box with and set up the downloads whilst lazing on the couch. Thankfully it looks like my tablet of choice, a WiFi-only Xoom, can be shipped to Australia via Amazon so I'll be ordering one very soon.

Initially I thought I'd go for one of the top of the line models with all the bells and whistles, most notably a 3G/4G connection. That was mostly just for geek cred since whenever I'm buying gadgets I like to get the best that's on offer at the time (as long as the price isn't completely ludicrous). After a while though I started to think about my particular use patterns and I struggled to find a time where I'd want to use a tablet and be bereft of a WiFi connection, either through an access point or tethered to my phone. There's also the consideration of price, with non-cellular tablets usually being quite a bit cheaper, on the order of $200 less in the Xoom's case. It got me thinking: what exactly is the use case for a tablet with a cellular connection?

The scenarios I picture go something along these lines. You're out and about, somewhere that has mobile phone reception, but you don't have your phone on you (or have one that's not capable of tethering) and you're nowhere near a WiFi access point. Now having mobile phone reception but no WiFi is a pretty common event, especially here in Australia, but the other side of that situation is that you either can't tether to your mobile phone because it's not capable or you don't have it on you. Couple that with the fact that you're going to have to pay for yet another data plan just for your new tablet, and you've really lost me as to why you'd bother with a tablet that has cellular connectivity.

If your reason for getting cellular connectivity is that you want to use it when you don't have access to a WiFi access point, then I could only recommend it if you have a phone that can't tether to other devices (although I'd struggle to find one today, heck even my RAZR was able to do it). However, if I may make a sweeping statement, I'd assume that since you've bought a tablet you already have a smart phone which is quite capable of tethering, even if the carrier charges you a little more for it (which is uncommon here and usually cheaper than a separate data plan). The only real reason to have it is for when you have your tablet but not your phone, a situation I'd be hard pressed to find myself in without also being within range of an access point.

In fact most of the uses I can come up with for a tablet actually require it to be on some kind of wireless network, since it makes a fitting interface to my larger PCs, while all the functions that could be done over cellular networks are aptly covered by a smartphone. Sure a tablet might be more usable for quite a lot of activities, but it's also quite a lot more cumbersome than something that can fit into my pocket, and rarely do I find myself needing functionality above that of the phone but below that of a fully fledged PC. This is why I was initially skeptical of the tablet movement, as the use cases were already aptly covered by current generation devices. It seems there's quite a market for transitional devices, however.

Still, since nearly every manufacturer is making both cellular and WiFi-only tablets there's got to be something to it, even if I can't figure it out. There's a lot to be said for the convenience factor and I'm sure a lot of people are willing to pay the extra just to make sure they can always use their device wherever they are, but I, for one, can't seem to get a grip on it. So I'll put it out to the wisdom of the crowd: what are your use cases for a cellular enabled tablet?

Necessity is the Mother of Invention.

I've been developing computer programs on and off for a good 7 years and in that time I've come across my share of challenges. The last year or so has probably been the most challenging of my entire development career as I've struggled to come to grips with the Internet's way of doing things and how to enable disparate systems to talk to each other. Along the way I've often hit problems that on the surface appear next to impossible to solve, or come to a point where a new requirement makes an old solution no longer viable. Time has shown, however, that whilst I might not find an applicable solution through hours of intense Googling or RTFM, there are always clues that lead to an eventual answer. I've found though that such solutions have to be necessary parts of the larger whole, otherwise I'll just simply ignore them.

Take, for instance, this past weekend's work on Lobaco. Things had been going well: the last week's work had seen me enable user sign-ups in the iPhone application and build the beginnings of an enhanced post screen that allowed users to post pictures along with their posts. Initial testing of the features seemed to go well and I started testing the build on my iPhone. Quickly however I discovered that both of the new features I had put in struggled to upload images to my web server, crashing whenever a picture was over 800 by 600 in size. Since my web client seemed to be able to handle this without an issue I wondered what the problem could be, so I started digging deeper.

You see, way back when, I had resigned myself to doing everything in JavaScript Object Notation, or JSON for short. The reason behind this was that, thanks to it being an open standard, nearly every platform out there has a native or third party library for serialising and deserialising objects, making my life a whole lot easier when it comes to cross platform communication (i.e. my server talking to an iPhone). The trouble with this format is that whilst it's quite portable, everything done in it must be text. This causes a problem for large files like images, as they have to be converted into text before they can be sent over the Internet. The process I used for this is called Base64 and it has the unfortunate side effect of increasing the size of the file to be transferred by roughly 37%. It also generates an absolutely massive string that brings any debugger to its knees if you try to display it, making troubleshooting hard.
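For the curious, here's a quick sketch of where that overhead comes from, using a random blob as a stand-in for a photo:

```python
import base64
import os

# Base64 packs 3 raw bytes into 4 ASCII characters, so the encoded payload is
# ~33% larger before it even hits the wire; MIME-style line breaks and JSON
# string escaping push the real-world figure towards the ~37% mentioned above.
image_bytes = os.urandom(300 * 1024)              # stand-in for a ~300KB photo
encoded = base64.b64encode(image_bytes)

overhead = (len(encoded) - len(image_bytes)) / len(image_bytes)
print(f"raw: {len(image_bytes):,} bytes, base64: {len(encoded):,} bytes "
      f"(+{overhead:.0%})")                       # roughly +33%
```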

The image uploading I had designed and successfully used up until this point was now untenable, as the iPhone client simply refused to play nice with ~300KB strings. I set about looking for a way out, hoping for a quick fix. Whilst I didn't find a good drag-and-drop solution I did come across this post, which detailed a way to program a Windows web service that could receive arbitrary data. Implementing their solution as it is detailed there still didn't actually work as advertised, but after wrangling the code and overcoming the inbuilt message size limits in WCF I was able to upload images without having to muck around with enormous strings. This of course did mean changing a great deal of how the API and clients worked, but in the end it was worth it for something that solved so many problems.
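The WCF specifics are in the linked post, but the general shape of the fix is language-agnostic: send the raw image bytes as the request body on a dedicated endpoint and keep JSON for the metadata only. Here's a bare-bones sketch of that pattern, not my actual WCF code; the endpoint path and file name are made up for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    """Accepts raw image bytes on /upload instead of base64 inside JSON."""

    def do_POST(self):
        if self.path != "/upload":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        image_bytes = self.rfile.read(length)      # raw bytes, no base64 inflation
        with open("upload.jpg", "wb") as f:        # illustrative; a real service would store properly
            f.write(image_bytes)
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), UploadHandler).serve_forever()
```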

The thing is, before I went on this whole adventure, had you asked me if such a thing was possible I would've probably told you no, at least not within the WCF world. In fact much of my early research into this problem was centred around possibly implementing a small PHP script to accomplish the same thing (as there are numerous examples of that already), however the lack of integration with my all-Microsoft solution meant I'd be left with a standalone piece of code that I wouldn't have much interest in improving or maintaining. By simple virtue of the fact that I had to come up with a solution I tried my darnedest to find one, and lo, I ended up creating something I couldn't find anywhere else.

It's like that old saying that necessity is the mother of invention, and that's true for both this problem and Lobaco as an idea in itself. Indeed many of the great Internet giants and start-ups of today were founded on the idea of solving a problem that the founders themselves were experiencing and felt could be done better. I guess I just find it fascinating how granular a saying like that can be, with necessity driving me to invent solutions at all levels. It goes to show that embarking into the unknown is a great learning experience and there's really no substitute for diving in head first and trying your hardest to solve an as-yet unsolved problem.

The Data Authenticity of Geo-Social Networks.

The last couple of weeks have seen me make some pretty amazing progress with the new version of Geon. I've settled on a name for the service, managed to get a 4 letter TLD to host it under, and the Silverlight client has seen a massive redesign that drove a complete rework of the underlying API. It's been quite a learning experience and I've encountered quite a few problems along the way that have served to give me some insight into the issues the big guys probably had when they were first starting out. Whilst the system currently has a user base of one (well, three: the Anonymous user, myself and a friend whose identity I stole to test out some features) I still got to thinking about the authenticity of my data and how I was going to manage it.

I first encountered this when I was coding up the login system for Geon. Originally it was based around the built-in Windows Communication Foundation Authentication Service which, whilst being a downright pain to get working initially, provided all the necessary security for my web application without me having to think about how it got the job done. Unfortunately this wouldn't work too well when I moved away from the .NET platform, namely to either Android or the iPhone, as they don't have any libraries that support it. So as part of my complete client redesign I thought it best not to rely on anything that I couldn't use on my other platforms, and that meant building the Silverlight client as if it were a mobile phone.

In all seriousness I would've been completely lost if I hadn't stumbled upon Tim Greenfield's blog, specifically this post, which outlined the core ideas for implementing a secure login system that uses RIA services. After doing some rough designs and mulling the idea over in my head, I got a working implementation of it going a couple of weeks ago, allowing a user to log in without having to rely on the built-in Microsoft frameworks. Initially everything was looking good and I went ahead coding up the other parts of the application, thinking that my bare bones implementation would suffice for the use cases I had in mind.

However after a while I began to think about how easy it would have been for a nefarious (or just plain curious) user to wreak untold havoc on my system. You see, the login function needed 4 parameters: the user name, password, IP address and whether or not this session should be remembered next time the user visits the page. The IP address was there for security: if someone manages to get your session ID they could theoretically use it to hijack your session and do all sorts of mean things with your account. In my implementation, though, the IP address was passed up as part of the request, which meant that anyone looking to perform a session hijack would simply have to pass up the valid IP for that session and I'd be none the wiser. Realising that this would be an issue I implemented server-side IP detection, which makes it quite a lot harder to get the magic combination of session ID and IP address correct, making my service just that much more secure.
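For those wondering what that change actually looks like, it boils down to this: derive the client's IP from the server's view of the connection rather than trusting a field in the request, then bind it to the session and check it on every call. A bare-bones sketch of the idea follows; Geon's real implementation lives in .NET, so everything below is illustrative only.

```python
import secrets

# session_id -> the IP address the session was created from
sessions: dict[str, str] = {}

def create_session(client_ip: str) -> str:
    """client_ip must come from the server's view of the connection
    (e.g. the TCP peer address), never from a field the client sends."""
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = client_ip
    return session_id

def validate_session(session_id: str, client_ip: str) -> bool:
    """A stolen session ID is useless unless the hijacker can also
    originate requests from the same IP address."""
    return sessions.get(session_id) == client_ip
```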

This got me thinking about the authenticity of the data I was going to be collecting from my users. I'm not putting any limitations on where people can post, but I am going to be flagging people as "out of area" when they're posting or responding to something that's not near their current location. However, since I want to make the API open I have to make the co-ordinates part of the update request, which unfortunately opens it up to the possibility of people faking their location. Not that there would be a whole lot to gain from doing so, but if my feed reader has taught me anything recently it's that the geo-social networking space is constantly grappling with this issue and there's really no good solution for it.
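The flagging itself is the easy part; something along these lines is all it takes. The 25km radius and the function names here are placeholders of my own, not Geon's actual values, and of course a determined user can still lie about both sets of coordinates.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def is_out_of_area(post_lat, post_lon, user_lat, user_lon, radius_km=25.0):
    """Flag a post when its claimed coordinates are far from the poster's
    reported current location."""
    return haversine_km(post_lat, post_lon, user_lat, user_lon) > radius_km
```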

There seem to be two schools of thought on data authenticity when it comes to the location space. The Foursquare approach is one of mostly indifference: whilst they have cheater code to deal with people trying to get that elusive mayor title, they seem to have no problem with those who check in where their friends are, or with people creating a fake venue for others to check in to. I'm not surprised at their reaction, as both of those kinds of behaviour mean people are using their service and finding new, inventive ways of using it, which could potentially translate into new features. The second is the strict "no fakery here" policy that Gowalla has taken with the 6 commandments of their API. Whilst they're still leaving themselves open to abuse, the no tolerance policy suggests that they value data integrity much more highly than Foursquare does. Clamping down on fake check-ins would mean that their data is more reliable and thus more valuable than Foursquare's, but that comes down to what you're using it for, or who you're selling it to.

Personally I'm in favour of the Gowalla route, but only because there's little value in faking location data in my application. Sure there are potential scenarios where it might be useful, but since I'm not placing any restrictions (only identifying out of area people) I can't really see why anyone would want to do it. That might change when I put the social game mechanics in and actually get some users on the service, but that's a bridge I'll cross when I come to it. Right now the most important thing is trying to get it out the damn door.

I'm hoping that will be soon, as once I get the core in I get to buy a MacBook Pro to code on, yay! :D