The Oculus Rift Kickstarter campaign showed that there was an appetite for virtual reality to make a comeback. However, the other side of that equation, the developers who'd be delivering experiences through the VR platform, wasn't really prepared to capitalize on it. There are numerous reasons for this but mostly it comes down to consumer VR still being a nascent industry, with the tooling needed to make the experience seamless still not there. Unfortunately it's something of a chicken-and-egg problem: standards and tooling won't fully emerge until there's a critical mass of users, and those users won't appear until those standards are in place. This is why the Oculus Rift consumer model costs far more than its sticker price suggests.
Many looked towards the Oculus Rift as the definitive VR headset, something which Oculus has obviously taken into account when designing it. Whilst I, as an early adopter of many pieces of technology, may appreciate the no-holds-barred approach for devices like this, I know it limits broader appeal. Whilst this is sometimes a good strategy for getting your production line stood up (à la Tesla, which produced the Roadster and then the Model S), Oculus had already achieved that with the previous two iterations of the dev kit. I think what many were expecting was the Model T of VR headsets; what they got instead was a Rolls Royce Phantom.
However Oculus is no longer the only name in the game, with both the HTC VIVE PRE and the PlayStation VR headsets scheduled to come out in the first half of this year. Both of these are targeting a much more reasonable price point, although their makers admit the headsets are not as premium as the Oculus Rift. Whilst Oculus' preorders may have surpassed their expectations, I still feel they alienated a good chunk of their market by going for the price point they did. For those who balked at the Oculus' price the other two headsets could prove to be a viable alternative, and that could spell trouble for Oculus.
Whilst Oculus won't be going anywhere soon as a company (thanks entirely to the Facebook acquisition), they will likely struggle to cement their position as the market leader in the VR headset space. Indeed the higher price point, which according to Oculus is the bare minimum they can charge, won't come down significantly until economies of scale kick in. Lower sales volumes mean that will take much longer to come into effect and, potentially, HTC and Sony could be well on their way to mass-produced headsets that cost a fraction of the Oculus' price.
In the end it comes down to which of the headsets provides a "good enough" experience at the most attractive price. There will always be a market for a premium version of a product, however it's rare that those models are the ones most frequently purchased. Oculus' current price point puts it out of the reach of many, a gap which HTC and Sony will rush in to fill in short order. The next year will then become a heated battle for the VR crown, showing which product strategy was the right one. For now my money is on the cheaper end of the spectrum, and I'm waiting to be proved wrong.
Why the Abbott government hasn't abandoned its incredibly unpopular metadata policy yet is beyond me. Nearly all other developed nations that have pursued such a policy have abandoned it, mostly because attempting to pass something like this is akin to committing political suicide. Worse still, in its attempts to defend the policy from its critics the Abbott government has resorted to scare tactics and sensationalist rhetoric, none of which has any bearing on the underlying issues the policy faces. Top this off with a cost estimate that seems to be based on back-of-the-napkin math and you've got a recipe for bad legislation that will likely be implemented poorly and at great cost to all Australian citizens.
Conceptually the idea is simple: the government wants to mandate that all ISPs and communications providers keep all the metadata they generate for a period of 2 years. Initially this was sold as not being an increase in the powers that authorities have, however that framing is incredibly misleading as it greatly increases their ability to exercise those powers. Worse still, obtaining access to metadata doesn't require a warrant, and it isn't just the realm of law enforcement or intelligence agencies: even people on local councils can obtain this data. Suffice to say, the gathering and retention of this data is a massive invasion of the privacy the general public expects from its government, and that is exactly why nearly all developed nations have dropped such policies before they've been implemented.
As expected the usual tropes for these kinds of policies have been trotted out, initially under the guise of national security. I'd concede that point if it weren't for the fact that mass surveillance has not proved effective in combating terrorism, something the critics of the policy were quick to point out. The rhetoric has since shifted away from national security to local security, with Abbott saying that the metadata will help track down paedophiles and child traffickers. Suffice to say, if surveillance of this nature doesn't help at a national level then I highly doubt its effectiveness at lower levels, and "think of the children" arguments like this are nothing more than an appeal to emotion.
Yesterday Abbott was pressed to give some hard figures on just how much this scheme would end up costing and he retorted with the rather ineloquent quip that without it there would be an "explosion in unsolved crime". When pressed, the figure he gave was $300 million, less than 1% of the $40 billion the entire telecommunications sector is estimated to be worth. That figure has apparently been sourced from PricewaterhouseCoopers (PwC), however the details behind it have not been made public. In all honesty I cannot see how that figure can be accurate given the amount of data we're talking about and the retention times required.
To put that in perspective, Australians consumed something on the order of 1 exabyte of data in the 6 months to June last year, a 50% increase on the year before. The metadata on that traffic would be a fraction of it and, taking the same 1% liberty that Abbott seems intent on using, you get something like 50 petabytes of storage required over the 2 year retention window. Couple that with the fact that it won't be stored in one place (negating economies of scale), the infrastructure required to provide access to it and the personnel required to fulfil requests, and that $300 million figure starts to look quite shaky. Indeed the Communications Alliance in Australia has estimated the cost at between $500 million and $700 million, which casts doubt over how accurate Abbott's lowball figure is.
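That back-of-envelope estimate can be checked in a few lines of arithmetic. The inputs here are my assumptions, not official figures: 1 exabyte of traffic per 6 months, metadata at 1% of traffic volume, and a 2 year retention window, ignoring the ~50% annual growth (which is what pushes the result towards the 50 petabyte mark):

```python
# Back-of-envelope metadata storage estimate; all inputs are assumptions.
PB_PER_EB = 1000  # petabytes per exabyte (decimal units)

traffic_pb_per_half_year = 1 * PB_PER_EB  # ~1 EB consumed per 6 months
half_years_retained = 4                   # 2 year retention window
metadata_fraction = 0.01                  # the same 1% liberty as above

metadata_pb = traffic_pb_per_half_year * half_years_retained * metadata_fraction
print(f"{metadata_pb:.0f} PB")  # ~40 PB before traffic growth is factored in
```

Even this crude sketch lands in the tens of petabytes, spread across every provider in the country, which is why the $300 million figure looks optimistic.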
Honestly this legislation stinks no matter which way you cut it, and the rhetoric the incumbent government has been using to defend it speaks directly to that. These policies are simply not effective in what they set out to achieve, and the only tangible result we'll ever see from them will be an increased cost of accessing the Internet and a reduced expectation of privacy. I do hope Abbott keeps harping on about it though, as the more he talks the more likely it seems that we'll be able to cement the phrase "One Term Tony" in the history books.
Last week I wrote a post about the Solar Roadways Indiegogo campaign that had been sweeping the media. In it I did a lot of back-of-the-envelope math to come up with some figures based on my assumptions, which led me to the conclusion that the project looked feasible, with the caveat that I was working with very little information. Still, I did a decent amount of research into the various components to make sure I was in the right order of magnitude. You'd then think that the venerable Thunderf00t's takedown video on this project would put me at odds with him but, for the most part, I agree with him, although there were a couple of glaring oversights which I feel require some attention.
First off, let me cover the stuff I agree with. He's completely correct in asserting that the tile construction isn't optimal for road usage and that the issues arising from it are non-trivial. The idea of using LEDs sounds great in principle but, as he points out, they're nigh on invisible in broad daylight, which would make the road appear unmarked, a worrying prospect. Transporting the energy generated by these panels will also be quite challenging, as the current produced by your typical solar panel isn't conducive to being put directly on the grid. The properties of the road also require further validation: whilst the demonstrations shown by Solar Roadways suggest they're up to standard, there's little proof to back up these claims so far. Finally, the idea of melting snow seemed plausible to me at first look but I had not run any numbers against that claim, so I'd defer to Thunderf00t's analysis on this one.
However his claims about the glass are off the mark in many cases. Firstly, it's completely possible to make clear glass from recycled coloured glass, usually through the use of additives like erbium oxide or manganese oxide. I agree with his point that it's unlikely they have the facilities available to do this right now, however it's not out of the realm of possibility. Thunderf00t also makes the mistake of taking the single-item price of a piece of tempered glass off eBay and extrapolating that to an astronomical cost for covering all of the roads in the USA. In fact tempered glass produced at volume is rather cheap, about $7.50 per square meter, when you check out some large scale manufacturers. This makes the cost look far more reasonable than the $20 trillion that was originally quoted.
The same can be said for the solar panels, PCBs, LEDs and microcontrollers underneath them. Solar panels can be had for the low price of $0.53 per watt (a grand total of about $30 per panel) and RGB LEDs for about $0.08 each (you could have 1000 in each panel for $80). Indeed the construction of the panels themselves is likely not to be that expensive, especially at volume; however, the preparation of the surface and the conduit channel are likely to cost more than a traditional road. This is because you'd likely have to do the same amount of site prep work for both (you can't just lay these tiles into dirt) and then the panels themselves would be an incidental cost on top.
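Those unit prices can be turned into a rough per-panel bill of materials. The panel wattage and LED count below are my illustrative assumptions, not Solar Roadways' specifications:

```python
# Rough per-panel component cost from the unit prices quoted above.
# Panel wattage and LED count are illustrative assumptions.
cost_per_watt = 0.53   # $/W for bulk solar cells
panel_watts = 56       # assumed panel rating, giving ~$30 of cells
cost_per_led = 0.08    # $/LED in bulk
leds_per_panel = 1000  # assumed LED count per panel

solar_cost = cost_per_watt * panel_watts  # ~$30
led_cost = cost_per_led * leds_per_panel  # $80
print(f"solar ~${solar_cost:.2f}, LEDs ${led_cost:.2f} per panel")
```

Even doubling these figures for the PCB, microcontroller and glass, the electronics are a small fraction of what the site preparation would cost.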
Tempered glass is also a lot harder than your regular glass, something which Thunderf00t missed in his analysis. It's true that regular glass has a Mohs hardness of around 5, but tempered glass can reach 7 or higher depending on the additives used. Traditional road surfaces have a very similar hardness to tempered glass, meaning the tiles would suffer no more wear than a traditional road surface would. Whether this would mean a degradation in optical quality, and therefore solar efficiency, over time is something I can't really comment on, but the argument that sand and other debris would wear away the surface doesn't really hold up.
All this being said, Thunderf00t hits on the big issues Solar Roadways has to face in order for the idea to become a reality. Whilst I'm still erring on the side of it being possible, I do admit there are numerous gaps in our knowledge of the product, many of which could quickly render it completely infeasible. Still, there's potential for this idea to work in many areas, like the vast highways throughout Australia, even if some of the more outlandish ideas, like melting the snow on them, might not work out. It will be interesting to see how Solar Roadways reacts to this as there are numerous questions which can't go unanswered.
Australia is an incredibly strong country economically, ranked 12th in the world by GDP. When you consider that our population is a fraction of that of many countries above us (Canada is the closest in size, sitting in 11th spot with a population about 50% bigger than ours), it means that, on average, Australians are wealthier than their global counterparts. This is somewhat reflected in the prices we pay for certain things, however it doesn't take a lot of effort to show that we pay more than you'd expect for many goods and services. The most notable is media, as we lack any of the revolutionary services that drive prices down (Netflix, Hulu, etc.) or any viable alternatives. It gets even worse though, as it seems we also pay more just to go to the cinema.
The graphic above shows that Australia, along with a few other developed nations, pays an extraordinary amount more than others do when the costs are normalized. The differences between the lowest and the highest aren't exactly huge, you're looking at a spread of about $15 from the cheapest to the most expensive, however this is yet another indication of just how much more Australia pays for its media than anyone else. In essence we're paying something on the order of 25%~50% more for the same product, yet the excuse the industry once relied on, that Australia is "really far away", doesn't really hold water anymore.
It should come as little surprise then that Australians are far more likely to pirate than people in any other developed country, sometimes representing up to almost 20% of new release piracy. There have been some inroads made into reducing this number, with a few stations "fast-tracking" episodes (although they usually still carry a delay) or giving users access to an online option, however the former doesn't solve the problem entirely and the latter was unfortunately withdrawn. The hunger for the media is there, it's just that a reasonably priced option has failed to materialize for Australian users (and if you mention Quickflix I'll gut you), which has led to these dramatic figures.
Now I'd be entirely happy doing the slightly dodgy and getting myself a Netflix or Hulu account via a VPN or geo-unblocking service, however my bandwidth isn't up to the task of streaming media at 720p. Sure it could probably handle a lower resolution, but I didn't invest as much as I did in my home theatre system to have it operate at a sub-par level. This issue was supposed to go away with the NBN being just around the corner, but I have no idea when that might be coming nor which incarnation of it I will end up getting. So it seems that, at least for now, I'm stuck in digital limbo where I either fall to piracy or get gouged repeatedly.
Neither of these issues is beyond fixing, and indeed it's been shown that once a reasonably priced alternative becomes available people ditch piracy in a heartbeat. Heck, I know that once Steam became widely available my game spend increased dramatically, especially after I found sites like DLcompare. I can assure you the same will happen once a media-based alternative comes to Australia, and I'm not the only one with the disposable income to support it.
For PAX Australia this year my friends and I were left in a rather unenviable position. None of the Melbourne residents had the space to accommodate the 6 of us visiting, and trying to find accommodation that would suit us was proving troublesome. Sure, we could've booked multiple hotel rooms, but the price wasn't particularly great, on the order of $200 per night per room (of which we'd need 3). Whilst we'd previously used holiday homes for other adventures, our usual websites weren't coming up with anything, at least nothing in a reasonable price range. After mulling over the options I finally relented and gave Airbnb a go, and the experience was pretty amazing.
Searching around for places close to PAX, I found a couple that were available for that weekend and could accommodate the lot of us. From the pictures most of them didn't look like anything special, but we weren't going to be doing much there at all so I wasn't particularly fussed. After jumping through a couple of login hoops and laying down my credit card I had booked 3 nights' accommodation for 6 people for a grand total of $600. If I had booked hotels to cover the same period the cost would have been almost triple that, something my travelling compatriots were very pleased with.
We were quite unlucky when it came to flying down, as the weather saw many of the afternoon flights cancelled. I was worried that we'd get there too late and end up annoying our hosts, but arriving at 9:30PM I was greeted by the couple who owned the house plus one of their friends. After dropping off all my gear I asked them if there was anywhere local I could get dinner and, to my surprise, they offered up the leftovers from the dinner they had just packed away. They also gave us breakfast every morning, not that we stayed for it since we were usually meeting up with everyone at PAX.
Talking it over with our hosts, it seemed this experience wasn't exactly uncommon, as they had had several Airbnbers through previously, all of whom said similar things. Indeed all my friends who have used Airbnb since have commented on just how smooth the whole process is and how cheap the accommodation is compared to hotel rooms of the same quality. This is even in a country where Airbnb doesn't see much use compared to local equivalents (like Stayz).
It came as little surprise then that Airbnb has been shown to have positive effects on tourism in the areas in which it's prevalent, with guests often spending a lot more in the area than their hotel counterparts. I know that for me personally the money that would've otherwise been spent on accommodation did end up in other places, and I felt far more at ease spending more knowing that my entire accommodation budget was only $100. At the same time I know that some of my friends might not have attended if the accommodation price was too high, and Airbnb made it possible for them to come and not have to worry about it.
What Airbnb has above everyone else is the fact that their service just plain works, taking away all the barriers that would otherwise stand in the way of booking a stay at a non-hotel location. I was able to find a place, check it out, book it and send an email to everyone coming, all in the space of 30 minutes, without ever having used the service before. The only improvement I'd love to see (and feel free to correct me if this already exists) would be the ability to split the payment up and have everyone pay their share directly. It wasn't too much of an issue for me, but it's something I'm sure a lot of people would love.
Now we just need Uber to make their way over here; then I'll be able to handle all my travel needs from my smartphone. Now that'd be awesome.
Anyone who's had a passing interest in computers has likely run up against the notion of Moore's Law, even if they don't know the exact name for it. Moore's Law is a simple idea: approximately every 2 years the amount of computing power that can be bought cheaply doubles. This often takes the more common form of "computing power doubles every 18 months" (thanks to Intel executive David House) or, for those uninitiated with the law, computers get obsoleted faster than any other product in the world. Since Gordon E. Moore first stated the idea back in 1965 it's held up extremely well and for the most part we've beaten the predictions pretty handily.
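The compounding behind the law is easy to sketch. The baseline transistor count here is an illustrative assumption, not a figure from any particular chip:

```python
# Moore's Law as compound doubling; baseline count is an illustrative assumption.
def projected_transistors(years: float, base: float = 1e9,
                          doubling_period: float = 2.0) -> float:
    """Projected transistor count `years` after the baseline,
    doubling once every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

print(projected_transistors(2))   # one doubling after 2 years: 2 billion
print(projected_transistors(10))  # five doublings in a decade: 32 billion
```

It's that 32x-per-decade compounding that makes any physical limit feel inevitable, and why each new barrier attracts so much attention.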
Of course there's been a lot of research into the upper limits of Moore's Law, as with anything exponential it seems impossible for it to continue for an extended period of time. Indeed current generation processors built on the standard 22nm lithography process were originally thought to be one such barrier, because the gate leakage at that scale was thought to be insurmountable. Of course new technologies enabled the process to be used, and we've still got another 2 generations of lithography processes ahead of us before current technology suggests another barrier.
More recently, however, researchers believe they've found the real upper limit after creating a transistor that consists of only a single atom:
Transistors — the basic building block of the complex electronic devices around you. Literally billions of them make up that Core i7 in your gaming rig and Moore's law says that number will double every 18 months as they get smaller and smaller. Researchers at the University of New South Wales may have found the limit of this basic computational rule however, by creating the world's first single atom transistor. A single phosphorus atom was placed into a silicon lattice and read with a pair of extremely tiny silicon leads that allowed them to observe both its transistor behavior and its quantum state. Presumably this spells the end of the road for Moore's Law, as it would seem all but impossible to shrink transistors any farther. But, it could also point to a future featuring miniaturized solid-state quantum computers.
It's true that this seems to suggest an upper limit to Moore's Law; I mean, if the transistors can't get any smaller, then how can the law be upheld? The answer is simple: the size of transistors isn't actually the limitation on Moore's Law, the cost of their production is.
You see, most people are only familiar with the basic "computing power doubles every 18 months" version of Moore's Law, and many draw a link between that idea and the size of transistors. Indeed the size is definitely a factor, as shrinking transistors means we can squeeze more of them into the same space, but this overlooks the fact that modern CPU dies haven't really increased in size at all in the past decade. Additionally, new techniques like 3D CPUs (currently all the transistors on a CPU are in a single plane) have the potential to grow the number of transistors exponentially without needing the die shrinks we currently rely on.
So whilst the fundamental limit on how small a transistor can be might be a factor that affects Moore's Law, it by no means determines the upper limit; the cost of adding those extra transistors does. Indeed, every time we believe we've discovered yet another limit, another technology gets developed or improved to the point where Moore's Law becomes applicable again. This doesn't negate work like that in the linked article above, as discovering potential limitations better equips us for dealing with them. For the next decade or so I'm very confident that Moore's Law will hold up, and I see no reason why it won't continue for decades afterward.
There's really only one thing that stops me from playing most of the Call of Duty series on the day of release, and that's simply the price. Whilst the games more than pay for themselves in terms of hours played versus hours worked to acquire them, I'm still never happy shelling out $80+ for a game on Steam when it costs a whole lot less in another country. For Call of Duty: Black Ops, then, I simply waited until it went on sale for half off before grabbing it, a price I was much happier to pay, even if the overseas store also received it at half price. Still, enough people had been bugging me to get into this game ever since its release that I figured there had to be something good about it, and strangely enough I've also been suckered into the multiplayer, something I usually avoid with these kinds of games.
Call of Duty: Black Ops takes place during the Cold War, with the vast majority of the missions being recounted in flashbacks by the main character, Alex Mason. At the beginning you awake in an interrogation room, strapped into a chair and wired to an electric shock device. Your captors then start questioning you about the location of a numbers station and attempt to jog your memory by running you through past events and occasionally jolting you. If I'm honest, I really don't like having stories retold in flashbacks as too often it's used as an easy way to patch together a plot made up of otherwise incongruent elements. It's still serviceable, however, and if we're honest with ourselves no one is buying this game solely for the plot of the single player campaign.
The Cold War setting does make for some extremely interesting environments for the story to play out in. Whilst there are no gratuitous space scenes like its predecessor had, there are an incredible number of what I call "treat" scenes that seem to be there just to wow the player with eye candy and action hero style antics. The screenshot above is one such scene, where Mason is tasked with stopping the Soviet Union from launching Soyuz 1 and 2, with the mission culminating in shooting a prototype missile at the already launched craft. That's not even the most ludicrous scene that plays out in Call of Duty: Black Ops, but it was one of my most guilty pleasures in the game.
The Call of Duty series has done extremely well at creating a game experience where you feel both like the hero and part of something much greater at the same time. Whilst I'm not averse to being the lone hero in games, I've found myself enjoying games that make you feel like part of a bigger picture. The first game to get this feeling just right was Freelancer, where in one of the later missions you join up with a large fleet as part of the final series of missions. Black Ops manages to recreate this feeling consistently, with you almost never being alone and in many cases being surrounded by your fellow men, powering forward towards your goal.
The gameplay itself is nothing revolutionary, but Treyarch have done their best to make sure that Black Ops isn't just one long cover-based shooter. Whilst you will spend the vast majority of your time ducking in and out of cover to take out an inordinate amount of resistance, there are several sections where you'll be doing something out of the ordinary. Such things range from flinging explosives from handmade catapults to guiding soldiers on the ground from the cockpit of an SR-71 Blackbird. For the most part they're welcome breaks from the almost constant combat, but some proved to be more progression blockers than anything, especially if you missed the cue to do something out of the ordinary.
One such event was a section of the Vietnam missions where you're fighting your way down an embankment. The actual goal of this particular section was to kick barrels of napalm down the hill to clear out the section ahead. However, if you're like me you would have thought it was just another run-and-gun section, so I instantly made a break for a machine gun nest so I could cover the rest of my team mates. Doing so took me out of earshot of my companion, who was instructing me to kick the barrels, and thus I spent about 30 minutes wondering why the game would put in a section with practically unlimited enemies. I eventually came within earshot and figured it out, but it still felt like there should have been an on-screen prompt for those like me who were a bit too keen to man the guns.
Unlike its predecessor, though, I didn't feel the same level of immersion with Call of Duty: Black Ops. I think this can be put down to the way the story was presented: each section stood pretty well on its own, so the breaks between them with the interviews felt like good places to stop if I felt even the slightest bit bored. Couple that with the epicness fatigue (i.e. after everything being so epic for so long you just don't feel it anymore) you'll undoubtedly suffer, and the single player campaign in Black Ops is best enjoyed in shorter bursts of 1~2 hours. That being said, you'll more than likely be done with the entire game in 5 sittings doing that, so it's not the worst thing in the world.
Once the single player is over, however, many of Call of Duty: Black Ops' players will spend many more hours in the multiplayer, and rightly so. Realistically the single player of any Call of Duty game is the hook with which to draw people into multiplayer, as that's where the player base spends the vast majority of its time. Coming into a multiplayer game this late after its release was something I wasn't looking forward to; I thought I'd do a couple of hours just for the review and then be done with it before I raged like I used to back in my Counter Strike days. Strangely enough, though, I found myself quite enjoying the multiplayer experience, to the point of playing it for as long as I had played the single player.
If you've played any Call of Duty (or any multiplayer FPS, for that matter) the game modes available in Black Ops will be familiar to you. Indeed not much about it differs from previous Call of Duty games, with the persistent levels and the ability to customize your class being the main hooks that keep people coming back. I knew this getting into it, figuring I'd be slaughtered for the couple of hours I dared touch multiplayer. However, even with an uncustomized class I found myself being quite competitive and it didn't take me long to reach the levels required to unlock some decent kit and create my own class. By the end I felt nigh unstoppable, with my character being almost grenade proof, able to take out enemies both near and far and even topping the servers a few times. I still find myself going back for a round or two every so often when I've got some spare time, and I think I will keep doing so for a while to come.
The question I keep asking myself is: was it worth missing out on this for so long just to save $40? Considering I had so many other games to play at the time I didn't really miss playing Call of Duty: Black Ops, but suffice to say those who were pestering me to play it gave up long before I bought it and I haven't seen one of them playing it since. Still, despite that, the game was very enjoyable and even managed to reverse my stance of not bothering with the multiplayer in these kinds of games. In hindsight it would've been worth the cost of admission had I got it on day dot, but I guess when principles and my wallet are both hit at the same time it's enough to override my other impulses, no matter how strong they are.
Call of Duty: Black Ops might not break any new gaming ground or try very hard at being original, but it's still a blast to play, especially online. It's not often that a game makes it into my bag of titles I'll come back to when I just want to blow an hour or two on something fun, but I feel like Black Ops will be there for a while, at least until the next one comes out. So if you're a long time fan of the Call of Duty series, or of FPSs in general, you won't go wrong with Black Ops, and even if you're not there's still a good 8 hours of single player to be had, more than enough for gamers in today's market.
Call of Duty: Black Ops is available right now on PC, Xbox 360 and PlayStation 3 for $79, $68 and $68 respectively. The game was played on the second hardest difficulty setting with around 8 hours of total game time. Multiplayer was played on multiple Australian servers, with my most favoured game mode being Team Deathmatch on Nuketown, with around 6 hours of total play time and reaching level 18.
I remember getting my first ever phone with a data plan. It was 3 years ago and I remember looking through nearly every carrier's offerings to see where I could get the best deal. I wasn't going to get a contract, since I change my phone at least once a year (thank you FBT exemption), and I was going to buy the handset outright, so many of the bundle deals going at the time weren't available to me. I eventually settled on 3 Mobile as they had the best of both worlds in terms of plan cost and data, totalling a mere $40/month for $150 worth of calls and 1GB of data. Still, when I was talking to them about how the usage was calculated I seemed to hit a nerve over certain use cases.
Now I’m not a big user of mobile data despite my daily consumption of web services on my mobile devices, usually averaging about 200MB/month. Still, there have been times when I’ve really needed the extra capacity, like when I’m away and need an Internet connection for my laptop. Of course tethering the two devices together doesn’t take much effort at all (my first phone only needed a driver for it to work) and, as far as I could tell, the requests would look like they were coming directly from my phone. However the sales representatives told me in no uncertain terms that I’d have to get a separate data plan if I wanted to tether my handset, or if I dared to plug my SIM card into a 3G modem.
Of course upon testing these restrictions I found them to be patently false.
Now it could’ve just been misinformed sales people who got mixed up when I told them what I was planning to do with my new data-enabled phone, but the idea that tethered Internet usage is somehow different to normal Internet usage wasn’t new to me. In the USA pretty much every carrier will charge you a premium on top of whatever plan you’ve got if you want to tether your phone to another device, usually providing a special application that enables the functionality. Of course this has spurred people to develop applications that circumvent these restrictions on all the major smartphone platforms (iOS users will have to jailbreak, unfortunately) and the carriers aren’t able to tell the difference. But that hasn’t stopped them from taking action against those who would thwart their juicy revenue streams.
Most recently it seems that the carriers have been putting pressure on Google to remove tethering applications from the Android app store:
It seems a few American carriers have started working with Google to disable access to tethering apps in the Android Market in recent weeks, ostensibly because they make it easier for users to circumvent the official tethering capabilities offered on many recent smartphones — capabilities that carry a plan surcharge. Sure, it’s a shame that they’re doing it, but from Verizon’s perspective, it’s all about protecting revenue — business as usual. It’s Google’s role in this soap opera that’s a cause for greater concern.
Whilst this is another unfortunate sign that, no matter how hard Google tries to be “open”, it will still be at the mercy of the carriers, the banning of tethering apps sets a worrying precedent for carriers looking to control the Android platform. Sure, they already had a pretty good level of control over it since they all release their own custom versions of Android for handsets on their network, but now they’re also exerting pressure over the one part that was ostensibly never meant to be influenced by them. I can understand that they’re just trying to protect their bottom line, but the question has to be asked: is tethering really that big a deal for them?
It could be that my view is skewed by the Australian way of doing things, where data caps are the norm and the term “unlimited” is either a scam or at dial-up level speeds. Still from what I’ve seen of the USA market many wireless data plans come with caps anyway so the bandwidth argument is out the window. Tethering to a device requires no intervention from the carrier and there are free applications available on nearly every platform that provide the required functionality. In essence the carriers are charging you for a feature that should be free and are now strong-arming Google into protecting their bottom lines.
I’m thankful that this isn’t the norm here in Australia yet, but we have an unhealthy habit of imitating our friends in the USA, so you can see why this kind of behavior concerns me. I’m also a firm believer in the idea that once I’ve bought the hardware it’s mine to do with as I please, and tethering falls squarely under that. Tethering is one of those things that really shouldn’t be an issue, and Google capitulating to the carriers just shows how difficult it is to operate in the mobile space, especially if you’re striving to make it as open as you possibly can.
Look, I’m not going to say that I’m above rabid fanboyism. In fact there are multiple occasions where I’ve made up excuses for some of my companies of choice (notably BioWare and Sony), but I usually at least take the time to find out all the facts before disregarding them completely. Mostly I do this so I can use my opponent’s position against them, much like I’m doing in a fledgling tweet battle with one of my friends, but if I come across a hard fact that I can’t argue around I’ll do the requisite backflip and change my position on the matter. Like I did when the iPad sold like hot cakes.
However the latest storm comes from none other than the fanboys of that company. A couple of days ago Apple released the latest version of their Integrated Development Environment (IDE), Xcode 4. Personally I wasn’t excited about the release, since if I had my way I’d do everything in Visual Studio (and it means yet another 4GB download, eurrgh), but some of the features piqued my interest. The integration of Interface Builder into the core Xcode application is a welcome change, as are the improvements to the debugger and an intelligent error detection engine. I haven’t yet had a go with it, but the reviews I’ve read so far are positive, so I’m sure it will make my iPhone coding life a little easier.
However Apple made the controversial move of charging $4.99 for it through the Mac App Store (it’s still free to developers who are paying $99/year). Whilst the barrier to entry for Xcode is well above that thanks to the Apple hardware tax, it still pissed off a good number of enthusiasts, since Apple doesn’t ship a compiler with OS X, leaving many to either go without or go the dark route of installing GCC themselves. Personally I didn’t care either way, since I’m already well over $4,000 in the hole just for the privilege of developing for iOS, but what really got my goat was when people started comparing it to Visual Studio’s pricing.
Now, since Visual Studio is aimed at corporations its pricing is, how would you say, corporately priced. The cheapest version you can find on the site is $549, a whopping 110 times the price of Apple’s offering. Whilst I could argue that the value of Visual Studio is well worth the price of admission (and it is, even if it’s just for the debugger), you’d have to be a loon to pay that price if you just wanted to develop apps for a single platform. That’s because Microsoft offers platform-specific versions of Visual Studio for free under the Express line of products. There are currently 4 different versions, and combined they cover pretty much all types of development on the Windows platform. Apple no longer offers Xcode free in any form, so the comparison to Visual Studio is apples to oranges: depending on which way you look at it, Xcode is either 110 times cheaper or, against the free Express editions, infinitely more expensive (literally).
Perhaps I’m getting too worked up over an issue that in reality means nothing, since most of the people retweeting this nonsense are probably not developers. But still, when people show a blatant disregard for simple facts (hell, even a simple Google search would do), it gets me all kinds of angry. Couple that with a complete lack of other inspiration for today’s post and you get this ranty, nigh on pointless post about Apple fanboys. I probably shouldn’t be so angry at those who are simply retweeting the nonsense, but it’s exactly this kind of me-tooism that fuels the zero-value blogging making the signal on the Internet nigh on indistinguishable from the noise.
In today’s rough and unforgiving economic climate many companies are seeking to reduce costs and improve the return on all the investments that they’ve previously made. This, combined with several reports from market experts (Gershwin being a good example), has led to an overall decrease in the number of temporary workers hired and a push to bring a lot of talent in house. It would seem that the best option would be to secure employment now, skill up during these hard times and cash it all in when times come good again. You’d be crazy not to do it.
That is, unless you’re like me. I’m an IT contractor, and businesses will look at me first for the chop.
But what does trimming the contractors actually net my employer? In my current position I’m doing what a contractor is supposed to do: either filling a temporary vacancy whilst they find a full-time employee or bringing in additional skills required to implement various projects. Reducing the number of people like myself isn’t a bad thing in itself, but it will reduce your capability to deliver on required projects. It would seem, however, that some places are content to use contractors as full-time replacements. Using contractors in such a way will cost you much more than properly funding a suitably skilled full-time employee, although short-term budgeting will show a cost saving with the contractor, since you’re not paying for things like superannuation and insurance.
So what should employers be doing in order to weather these tough times? The answer isn’t what most employers want to hear, since they’ll be looking to reduce costs in the short term in the hopes that everything will come good. However, these are the factors that I have seen grab and retain exceptionally skilled people: a competitive salary, flexible working arrangements and a clear career path.
All these things will cost the employer something, but in return they will get an employee who is loyal and willing to go that extra step for the company. I’ve seen many places offer just one of the 3 above and think that will keep their employees going. It will for a time, but eventually they will start to desire more of these options, and if they’re determined they’ll find them elsewhere.
I think this is why the Australian Public Service has a track record of keeping people for long periods of time. Whilst the salaries might not be the greatest (although they are pretty amazing for entry level workers), the flexible working arrangements and very clear career paths tend to keep people on for many years. I was a public servant for almost 3 years before I turned to private industry, and I couldn’t have done uni and full time work without the arrangements they had available.
After all this, if you still want to hire me, remember this: I’m not a permanent replacement and I work for the highest bidder. It’s capitalism in its purest form, but I’ll be sure that you get your money’s worth.
I can’t guarantee that from all contractors though 🙂