It doesn’t seem that long ago that Felix Baumgartner leapt from his balloon 39 km above the Earth’s surface, breaking Joseph Kittinger’s long-standing record. The whole thing took only minutes but the journey back down captivated millions of people who watched on with bated breath. Curiously though we only saw one perspective of it for a long time, that of the observation cameras chronicling Felix’s journey. Now we can have front row seats to what Felix himself saw on the way down, including the harrowing spin that threatened to end everything in tragedy.
Cryptocurrencies and I have a sordid history. It began with me comparing BitCoin to a pyramid scheme, pointing out the issues that were obvious to many casual observers and receiving some good feedback in the process. Over time I became more comfortable with the idea, although still lamenting the volatility and obvious market speculation, and would go as far as to say I was an advocate for it, wanting it to succeed in its endeavours. Then I met the community, filled with outright hostile individuals who couldn’t tolerate any criticism and acted like they were the victims of an oppressive government regime. I decided then that I wouldn’t bother blogging about BitCoin as much as I had done previously as I was just sick of the community that had grown around it.
Then came Dogecoin.
Dogecoin, for the uninitiated, is a scrypt-based cryptocurrency (meaning it uses a memory-hard hashing algorithm, so the ASICs and other mining hardware that BitCoiners have invested in are useless for mining it) which bears the mark of the Internet meme Doge. The community that sprung up around it is the antithesis of what the BitCoin community has become, with every toxic behaviour lampooned and everyone encouraged to have fun with the idea. Indeed getting into Dogecoin is incredibly simple, with tons of guides and dozens of users ready and willing to help you out should you need it. Even if you don’t have the hardware to mine at a decent rate you can still find yourself in possession of hundreds, if not thousands, of Dogecoins in a matter of minutes from any number of the faucet services. This has led to a community of people who aren’t the technical elite or those looking to profit, the kinds of users who I believe led the other cryptocurrency communities to become so toxic.
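For the curious, the memory-hard property is easy to see in code. The sketch below uses Python’s standard hashlib.scrypt with illustrative cost parameters (not Dogecoin’s actual mining parameters): the working memory a miner needs scales with the cost factors, which is precisely what makes fixed-function SHA-256 hardware useless here.

```python
import hashlib

# Illustrative scrypt cost parameters; Dogecoin's real mining parameters differ.
# scrypt needs roughly 128 * n * r bytes of working memory, so raising n or r
# forces miners to buy RAM, not just faster logic like SHA-256 ASICs provide.
def scrypt_demo(data: bytes, n: int = 1024, r: int = 1) -> str:
    digest = hashlib.scrypt(data, salt=b"block-header", n=n, r=r, p=1, dklen=32)
    return digest.hex()

memory_bytes = 128 * 1024 * 1  # ~128 KB of working memory for n=1024, r=1
print(f"approx working memory required: {memory_bytes} bytes")
print(scrypt_demo(b"example block data"))
```

Doubling n doubles the memory bill for every hash attempt, which is why scrypt coins stayed GPU/CPU-friendly for so long.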
I myself hold about 20,000 Doge after spending about a week’s worth of nights mining on my now 3-year-old system. Whilst I haven’t done much more than that it was far, far more than I had ever thought about doing with any other cryptocurrency. My friends are also much more willing to talk to me about Dogecoin than BitCoin, with a few even going as far as to mine a few to fool around with on Reddit. Whether they will ever be worth anything doesn’t really factor into the equation but even with their fraction-of-a-penny value at the moment there have still been some incredible stories of people making things happen using them.
For most of its life though the structural issues that plagued BitCoin were also inherent in Dogecoin, albeit in a much less severe manner. The initial disparity between early adopters and the unwashed masses is quite a lot smaller due to Dogecoin’s initial virality but there was still a supposed limit of 100 billion coins, which still made it deflationary. However the limit wasn’t actually enforced and thus, in its initial incarnation, Dogecoin was inflationary, and a debate erupted as to what was going to be done. Today Dogecoin’s creator made a decision and he elected to keep it that way.
One of my biggest arguments against BitCoin was its deflationary nature, not because of whatever arguments people assume I have against it, more that the deflationary nature of BitCoin encouraged speculation and hoarding rather than spending. Whilst the inflation at this point is probably a little too high (i.e. the price instability is mostly due to new coin creation rather than much else) it does prevent people attempting to use Dogecoin as a speculative investment vehicle. Indeed many of those who don’t “get” Dogecoin have lamented this change but in all honesty this is the best decision that could be made and shows the Dogecoin creators understand the larger (non-technical) issues that plague BitCoin.
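The arithmetic behind why this defuses hoarding is worth spelling out. Assuming roughly 5 billion new Doge minted per year on top of a ~100 billion base (illustrative figures, not official parameters), a fixed absolute issuance means the inflation rate itself falls every year:

```python
# Sketch of why fixed absolute issuance means a *declining* inflation rate.
# Figures are rough approximations for illustration, not official parameters.
def inflation_rate(supply: float, annual_issuance: float = 5e9) -> float:
    return annual_issuance / supply

supply = 100e9
for year in range(1, 6):
    rate = inflation_rate(supply)
    print(f"year {year}: supply {supply / 1e9:.0f}B, inflation {rate * 100:.2f}%")
    supply += 5e9
```

So the currency starts at around 5% inflation and trends asymptotically towards zero, mild enough to discourage hoarding without eroding holdings the way hyperinflation would.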
Will this mean that Dogecoin will become the cryptocurrency of choice? Likely not as with most of these nascent technologies they’ll likely be superseded by something better that addresses all the issues whilst bringing new features that the old systems simply cannot support. Still the fact that there has been an explosion in altcoins shows that there’s a market out there for cryptocurrencies with feature sets outside of what BitCoin provides. Whether they win out all depends on where the market wants to head.
The story of AMD’s rise to glory on the back of Intel’s failures is well known. Intel, filled with the hubris that can only come from maintaining a dominant market position as long as they had, thought that the world could be brought into the 64bit world on the back of their brand new platform: Itanium. The cost of adopting this platform was high however as it made no attempts to be backwards compatible, forcing you to revamp your entire software stack to take advantage of it (the benefits of which were highly questionable). AMD, seeing the writing on the wall, instead developed their x86-64 architecture which not only promised 64bit compatibility but even went as far as to outclass then current generation Intel processors in 32bit performance. It was then an uphill battle for Intel to play catch-up with AMD but the past few years have seen Intel dominate AMD in almost every metric, with the one exception of performance per dollar at the low end.
That could be set to change however with AMD announcing their new processors, dubbed Kaveri:
On the surface Kaveri doesn’t seem too different from the regular processors you’ll see on the market today, sporting an on-die graphics card alongside the core compute units. As the above picture shows however the amount of on-die space dedicated to said GPU is far more than on any other chip currently on the market, and indeed the transistor count, a cool 2.1 billion, is a testament to this. After that however it starts to look more and more like a traditional quad core CPU with an integrated graphics chip, something few would get excited about, but the real power of AMD’s new Kaveri chips comes from the architectural changes that underpin this insanely complex piece of silicon.
The integration of GPUs onto CPUs has been the standard for some years now, with 90% of chips being shipped with an on-die graphics processor. For all intents and purposes the only distinction between them and discrete units is their location within the computer, as they’re essentially identical at the functional level. There are some advantages gained from being so close to the CPU (usually to do with the latency that’s eliminated by not having to communicate over the PCIe bus) but they’re still typically inferior due to the amount of die space that can be dedicated to them. This was especially true of generations prior to the current one, which weren’t much better than the integrated graphics cards that shipped with many motherboards.
Kaveri, however, brings with it something that no other CPU has managed before: a unified memory architecture.
Under the hood of every computer is a whole cornucopia of different styles of memory, each with their own specific purpose. Traditionally the GPU and CPU would each have their own discrete pools of memory, the CPU with its own pool of RAM (which is typically what people are referring to when they say “memory”) and the GPU with similar. Integrated graphics would typically take advantage of the system RAM, reserving a section for its own use. In Kaveri the distinction between the CPU’s and GPU’s memory is gone, replaced by a unified view where either processing unit is able to access the other’s. This might not sound particularly impressive but it’s by far one of the biggest changes to come to computing in recent memory and AMD is undoubtedly the pioneer in this realm.
A GPU’s power comes from its ability to rapidly process highly parallelizable tasks, examples being things like rendering or number crunching. Traditionally however they’re constrained by how fast they can talk to the more general purpose CPU, which is responsible for giving them tasks and interpreting the results. Such activities usually involve costly copy operations that flow through slow interconnects in your PC, drastically reducing the effectiveness of a GPU’s power. Kaveri CPUs on the other hand suffer from no such limitations, allowing for seamless communication between the GPU and the CPU and enabling them both to perform tasks and share results without the traditional overhead.
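To make the overhead concrete, here’s a toy Python sketch of the two data flows. The “GPU memory” is just a second buffer and nothing here touches real GPU APIs; the point is simply that the traditional path pays for a full copy in each direction while a unified path hands both units a view of the same memory:

```python
import time

# Toy model of the copy overhead unified memory removes. "Device memory"
# here is just another host buffer; no real GPU API is involved.
def traditional_dispatch(data: bytearray) -> bytearray:
    gpu_buffer = bytes(data)   # copy host -> device (over PCIe, traditionally)
    result = bytes(gpu_buffer) # "compute", then copy device -> host
    return bytearray(result)

def unified_dispatch(data: bytearray) -> memoryview:
    return memoryview(data)    # both units see the same underlying memory

payload = bytearray(64 * 1024 * 1024)  # a 64 MB workload

start = time.perf_counter()
traditional_dispatch(payload)
copied = time.perf_counter() - start

start = time.perf_counter()
unified_dispatch(payload)
shared = time.perf_counter() - start

print(f"with copies: {copied * 1000:.1f} ms, zero-copy: {shared * 1000:.3f} ms")
```

Even in this crude model the zero-copy path is orders of magnitude cheaper, and on real hardware the gap widens because the copies cross a comparatively slow bus.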
The one caveat at this point however is that software needs to be explicitly coded to take advantage of this unified architecture. AMD is working extremely hard to get low level tools to support this, meaning that programs should eventually be able to take advantage of it without much hassle, however it does mean that the Kaveri hardware is arriving long before the software will be able to take advantage of it. It’s sounding a lot like an Itanium moment here, for sure, but as long as AMD makes good on their promises of working with tools developers to take advantage of this (whilst retaining the required backwards compatibility) this has the potential to be another coup for AMD.
If the results from the commercial units are anything to go by then Kaveri looks very promising. Sure it’s not a performance powerhouse but it certainly holds its own against the competition and I’m sure once the tools catch up you’ll start to see benchmarks demonstrating the power of a unified memory architecture. That may be a year or two out from now but rest assured this is likely the future for computing and every other chip manufacturer in the world will be rushing to replicate what AMD has created here.
Ever since Nokia announced its partnership with (and subsequent acquisition by) Microsoft I had wondered when we’d start seeing a bevy of feature phones that were running the Windows Phone operating system behind the scenes. Sure there are a lot of cheaper Lumias on the market (the Lumia 520 can be had for $149 outright) but there isn’t anything in the low end, where Nokia has been the undisputed king for decades. That section of the market is now dominated by Nokia’s Asha line of handsets, a curious new operating system that came into being shortly after Nokia canned all development on Symbian and their other alternative mobile platforms. However there have long been rumours circling that Nokia was developing a low end Android handset to take over this area of the market, predominately due to the rise of cheap Android handsets that were beginning to trickle in.
The latest leaks from engineers within Nokia appear to confirm these rumours, with the above pictures showcasing a prototype handset developed under the Normandy code name. Details are scant as to what the phone actually consists of but the notification bar does look distinctly Android, with the rest of the UI not bearing any resemblance to anything else on the market currently. This fits in with the rumours that Nokia was looking to fork Android and make its own version of it, much like Amazon did for the Kindle Fire, which would also mean that they’d likely be looking to create their own app store as well. This would be where Microsoft could have its in, pushing Android versions of its Windows Phone applications through its own distribution channel without having to seek Google’s approval.
Such a plan almost wholly relies on the fact that Nokia is the trusted name in the low end space, managing to command a sizable chunk of the market even in the face of numerous rivals. Even though Windows Phone has been gaining ground recently in developed markets it’s still been unable to gain much traction in emerging markets. Using Android as a trojan horse to get users onto their app ecosystem could potentially work, however it’s far more likely that those users will simply remain on the new Android platform. Still there would be a non-zero number who would eventually look towards moving upwards in terms of functionality and when it comes to Nokia there’s only one platform to choose from.
Of course this all hinges on the idea that Microsoft is actively interested in pursuing this idea and it’s not simply part of the ongoing skunk works of Nokia employees. That being said Microsoft already makes a large chunk of change from every Android phone sold thanks to its licensing arrangements with numerous vendors so they would have a slight edge in creating a low end Android handset. Whether they eventually use that to try and leverage users onto the Windows Phone platform though will be something that we’ll have to wait to see as I can imagine it’ll be a long time before an actual device sees the light of day.
There are few things I find more enjoyable than putting together a new PC. It starts off with the chase, where I determine my budget and then start chasing down the various components that will make up the final system. Then comes the verification, where I trawl through dozens upon dozens of reviews to ensure that I’ve selected only the best products for their price bracket. Finally the time comes when I purchase all the components, hopefully from a single vendor with price matching, and then after the components arrive I’ll begin the immensely enjoyable task of assembling my (or someone else’s) new PC. Nothing quite beats the feeling of seeing Windows boot up for the first time on a new bit of hardware you just finished building.
Of course I realise that the vast majority of the world doesn’t enjoy engaging in such activities, especially if all you’re doing with your PC is watching movies or doing the occasional bit of word processing, and this is typically when I’ll send them to any one of a number of PC manufacturers who can give them a solid device with a long warranty. My gamer buddies will typically get me to validate their builds and, if they don’t feel up to the task, get me to build it or simply stick to consoles which provide a pretty good experience for much of their useful life. This is why I think Razer’s Project Christine is trying to target a market that just doesn’t exist as it sits in between already well defined market segments that are both already well serviced.
Project Christine is, as a concept, a pretty interesting idea. All the core components that make up a PC (RAM, storage, graphics card, etc.) have been modularized allowing almost anyone to build up a custom PC of their liking without the requisite PC building experience. The design is somewhat reminiscent of the Thermaltake Level 10 which used the compartmentalization of different parts to improve the cooling as well as to make maintenance easier. Razer’s concept takes this idea to the extreme, effectively commoditizing some of the skills required to build a high end gaming PC whilst still retaining the same issues around configuration, like knowing which components are the best bang for buck at the time.
Razer could potentially head off that second issue by going ahead with their subscription based model for upgraded parts. The idea would be that after you’ve bought whatever model you wanted (this service appears to be targeted to the high end) you then pay a monthly subscription fee to get the latest and greatest parts delivered to you. For the ultimate in hardcore gamers this could be somewhat attractive however it’d likely be an extremely expensive service to opt in to, as the latest PC components are rarely among the cheapest or best value. Still if you’ve got a lot of money and not a whole lot of time then it could be of use to you, although the fact that you’ve invested so much in a gaming rig typically means you have enough time to make use of it.
This is where I feel Project Christine falls down as the target market is a demographic of people who are interested in configuring their computer right up to the point of physically building it. Whilst I don’t really have any facts to back up this next assertion it has been my experience that people of this nature are either already well serviced by custom build services (which most PC shops provide) or know someone with the capabilities to do it. Sure the modular nature of the Christine is pretty awesome, and it certainly makes a striking impression, however that also means you need to wait for Razer to Christine-ify parts before they’ll be available to you. You might be able to crack them open and do the upgrade yourself but then you’re really only one step away from doing a full PC build anyway.
With consoles and PCs lasting longer and longer as time goes by concepts like Project Christine seem to be rooted in the past idea that a gaming PC needed constant upgrades to remain viable. That simply hasn’t been the case for the better part of a decade and whilst the next generation of consoles might spur an initial burst in PC upgrades it’s doubtful that the constant upgrade cycle will ever return. Project Christine might find itself with a dedicated niche of users but I really don’t believe it will be large enough to be sustainable, even with the Razer name behind it.
Rewind back a couple of years and the idea of wearable computing was something reserved for the realms of the ultra-geek and science fiction. Primarily this was a function of the amount of computing power and battery capacity we could stuff into a gadget that anyone would be willing to wear, as anything that could be deemed useful was far too bulky to be anything but a concept. Today the idea is far more mainstream, with devices like Google Glass and innumerable smart watches flooding the market, but that seems to be as far as wearable technology goes for now. Should Intel have its way though this could be set for rapid change with the announcement of the Intel Edison, an x86 processor that comes in a familiar (and very small) package.
It’s an x86 processor the size of an SD card and included in that package is a 400MHz processor (for the sake of argument I am assuming that it’s the same SoC that powers Intel’s Galileo platform, just a 22nm version), WiFi and low power Bluetooth. It can run a standard version of Linux and, weirdly enough, even has its own little app store. Should it retain its Galileo roots it will also be Arduino compatible whilst also gaining the capability to run the new Wolfram programming language. Needless to say it’s a pretty powerful little package and the standard form factor should make it easy to integrate into a lot of products.
By itself the Edison doesn’t suddenly make all wearable computing ideas feasible (indeed the progress made in this sector in the last year is a testament to that); instead it’s more of an evolutionary step that should help to kick-start the next generation of wearable devices. We’ve been able to go far with devices that have a tenth of the computing power of the Edison so it will be interesting to see what kinds of applications are made possible by the additional grunt it gives. Indeed Intel believes strongly in the idea that Edison will be the core of future wearable devices and has set up the Make It Wearable challenge, with over $1 million in prizes, in order to spur product designers on.
It will be interesting to see how the Edison stacks up against the current low power giant, ARM, as they have a bevy of devices already available that would be comparable to the Edison. Indeed it seems that Edison is meant to be a shot across ARM’s bow, as it’s one of the few devices that Intel will allow third parties to license, much in the same way ARM does today. There’s no question that Intel has been losing out hard in this space, so marketing the Edison towards the wearable computing sector is likely a canny play to carve out a good chunk of that market before ARM cements itself in it (like it did with smart phones).
One thing is for certain though, the amount of computing power available in such small packages is on the rise enabling us to integrate technology into more and more places. It’s the first tenuous steps towards creating an Internet of Things where seamless and unbounded communication is possible between almost any device. The results of Intel’s Make It Wearable competition will be a good indication of where this market is heading and what we, the consumers, can expect to see in the coming years.
Back in July David Cameron announced that he’d be ensuring that all ISPs within the United Kingdom would implement a mandatory filtering scheme. The initiative drew a lot of negative attention, including a post from yours truly, as the UK’s citizens were rightly outraged that the government felt the need to fiddle with their Internet connections. The parallels between Cameron’s policy and that of the Clean Feed here in Australia were shocking in their similarity and I, like many others, thought that it’d likely never see the light of day. Unfortunately though it appears that not only has Cameron managed to get the big 4 Internet providers on board he’s also managed to broaden the scope far beyond its original intentions, much to the chagrin of everyone.
The base principle behind this initiative appears to be the same as the Clean Feed: to protect children from the vast swaths of objectionable content that reside on the Internet. Probably the biggest difference between the two, however, stems from the implementation: whereas the Clean Feed was going to be enforced through legislation (although that later changed when it couldn’t pass parliament), Cameron’s filter is instead a voluntary code of practice that ISPs can adhere to. If the same thing was introduced in Australia it’s likely that no ISP would support it, however in the UK nearly all of the major suppliers have agreed to implement it. The problem with this informal system though is that the scope of what should and should not be blocked isn’t guarded by any kind of oversight and, predictably, the scope has started to creep far beyond its initial goals.
Among the vast list of things that are making their way onto the list of “objectionable” content are legitimate sites, including sex education resources and even the UK equivalents of sites like Kids Helpline. Back when Conroy first proposed the filter this kind of scope creep was one of the biggest issues many of us had with the proposal, as the process by which the list was made was secretive and the list itself, even though it was eventually made public, was also meant to be kept from the general public. Cameron’s initiative does the same and, just as everyone was worried about, the list of objectionable content has grown far beyond what the general public was told it would encompass. It’s happened so quickly that many have said (and rightly so) that it was Cameron’s plan all along.
If you ever had any doubts about just how bad the Clean Feed would have been in Australia then the UK’s initiative should serve as a good example of what we could have expected. The rapid expansion from a simple idea of protecting children from online pornography has now morphed into a behemoth where all content is judged against someone’s idea of what’s proper and what’s not. It’s only a matter of time before some politically sensitive content makes it onto the objectionable list, turning the once innocent filter into a tool of Orwellian oppression. I’d love to be proved wrong on this but I can’t say I’m hopeful given that the slippery slope that many of us predicted came true.
Fight this, citizens of the UK.
Governments often avoid long term policy goals for fear of never seeing them completed. This unfortunately means that large infrastructure projects fall by the wayside, as it’s unlikely that they’ll be finished in a single term, leaving a potential political win on the table for an incoming government. The National Broadband Network then was something of an oddity: forced into being due to the lack of interest the private sector showed in building it (despite heavy government funding), it was one of the few examples of a multi-term policy that would have tangible benefits for all Australians. Like any big project it had its issues but I, and many others, still thought it was worth the investment.
If you were to believe the Liberals’ rhetoric of the past couple of years however you’d likely be thinking otherwise. Whilst the initial volleys launched at the NBN were mostly focused on the fact that it was an expensive ploy by Labor to buy votes it soon metastasised into a fully fledged attack that had little rhyme or reason. Its ultimate form was the Liberals’ FTTN NBN, a policy which many saw as a half-hearted attempt to placate Liberal voters who saw the NBN as an expensive Labor policy whilst trying to retain the tech vote which they had spent so many years losing. After they got into government however many of us, myself included, thought that it was all a load of hot air and that they’d simply continue with the current NBN plan, possibly with someone else building it.
Oh how wrong we all were.
I mentioned last week that Turnbull needed to start listening to the evidence piling up that the FTTP NBN was the way to go, figuring that the unbiased strategic review would find in favour of it given the large body of evidence saying so. However the report was anything but unbiased, claiming that the current NBN plan was woefully behind schedule and would likely end up costing almost 50% more than currently expected. The new NBNCo board then recommended a plan of action that looked frightfully similar to the Liberals’ FTTN NBN, even touting the same party lines of faster, cheaper and sooner. Needless to say I have some issues with it, not least of which is the fact that it seems to be wildly out of touch with reality.
For starters I find it extremely hard to believe that NBNCo, a highly transparent company whose financials have been available for scrutiny for years, would be unaware of a cost blowout exceeding some $28 billion. The assumption behind the cost blowout seems to stem from an ill-formed idea that the cost per premise will increase over time, the exact opposite of reality. There also seems to be a major disconnect between the Liberals’ figures on take up rates and plan speeds, which makes it appear like there’s a huge hole in the revenue NBNCo would hope to generate. Indeed if we look at the 2013–2016 corporate plan the figures in there are drastically different to the ones the review is using, signalling that either NBNCo was lying (which they weren’t) or the strategic review is deliberately using misleading figures to suit an agenda.
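To illustrate why a rising cost per premise is such an odd assumption, a toy learning-curve model (all figures invented for illustration, not NBNCo’s actual costings) shows the usual shape of large rollouts: per-premise cost falls as crews gain experience and processes are refined:

```python
# Toy learning-curve model of rollout economics. The $4000 starting cost and
# 5% annual improvement are invented for illustration only.
def cost_per_premise(initial_cost: float, year: int, learning_rate: float = 0.05) -> float:
    return initial_cost * (1 - learning_rate) ** year

for year in range(4):
    print(f"year {year}: ${cost_per_premise(4000, year):.0f} per premise")
```

Assuming the opposite (costs compounding upwards year after year) is what lets a review manufacture a multi-billion dollar blowout from otherwise unremarkable inputs.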
I won’t mince words here as it’s clear that many aspects of the review have a political agenda behind them. The $28 billion blowout in the FTTP NBN seems to have been calculated to make the $11 billion increase in peak funding for the Liberal’s NBN seem a lot more palatable, even though its cost is now basically the same as the original costings for the FTTP NBN. Honestly we should have expected this when the majority of the new NBNCo board is staffed with former executives from telcos who have large investments in Hybrid Fiber Coaxial networks, something which the new NBN will be on the hook for (even though the Liberals seem to think they’ll get those for free).
In short the review is laughable, an exercise in fudging numbers to suit a political agenda that has absolutely zero grounding in reality. The end result is that we, the Internet users of Australia, will get horrendously screwed with outdated technology that will have to be replaced eventually anyway, at a cost that will far exceed that of a pure FTTP solution. Of course it’s now clear that it was never Turnbull’s intention to do a fair and honest review; he was only interested in being given evidence to support his skewed view of technology.
Convincing the wider tech community that the FTTN NBN is a bad idea isn’t exactly a hard task, as anyone who’s worked in technology understands the fundamental benefits of a primarily fibre network over one that’s copper. Indeed even non-technical users of Australia’s current broadband network are predominately in favour of the fully fibre solution, knowing that it will lead to a better, more reliable service than anything the copper network can deliver. There was a glimmer of hope back in September when Turnbull commissioned NBNCo to do a full report on the current rollout and how that would compare to his FTTN solution, however his reaction to a recent NBNCo report seems to show otherwise.
The document in question is a report that NBNCo prepared during the caretaker period that all government departments enter prior to an election. The content of the document has been rather devastating to the Coalition’s stance that FTTN can be delivered faster and cheaper with NBNCo stating in no uncertain terms that they would not be able to meet the deadlines promised before the election. Additionally many of the fundamental problems with the FTTN solution were also highlighted which should be a very clear signal to Turnbull that his solution is simply not tenable, at least in its current form.
However Turnbull has done as much as he can to discredit this report, taking the stance that it was heavily outdated and written over 6 months ago. This is clearly not the case though, as there’s ample evidence that it was written recently, even if it was during the recent caretaker period (where, you could potentially argue, NBNCo was still under the influence of Labor). In all honesty the time at which it was written is largely irrelevant, as its criticisms have been echoed by myself and other IT pundits for as long as the Coalition has spruiked their FTTN policy.
Worse still the official NBNCo report, which Turnbull has previously stated he’ll bind himself to, was provided to him almost 2 weeks ago and hasn’t seen the light of day since. It was even brought up during question time at a recent sitting of parliament and Turnbull was defiant in his stance to not release it. We’ll hopefully be getting some insight into what the report actually contains tomorrow, as a redacted version will be made available to some journalists. For someone who wanted a lot more transparency from NBNCo he is being awfully hypocritical as, if he was right about FTTN being cheaper and faster to implement, the report would have supported that view. The good money, then, is on the report being far more damning of the Coalition’s policy than Turnbull had hoped it would be.
If Turnbull wants to keep any shred of credibility with technically inclined voters he’s going to have to fess up sooner or later that the Coalition’s policy was a non-starter and that pursuing the FTTP solution is the right way to go. Heck he doesn’t even have to do the former if he doesn’t want to, but putting his stamp on the FTTP NBN would go a long way to undoing the damage to his reputation as the head of technology for Australia. I guess we’ll know more about why he’s acting the way he is tomorrow.
I had given up on writing about BitCoin because of the rather toxic community that seemed to appear whenever I wrote about it. They never left comments here; instead they’d cherry-pick my articles, never attempt to read any of my further writings on the subject and then label me a BitCoin cynic. It had gotten to the point where I simply couldn’t stomach most BitCoin articles because of the ensuing circlejerks that would follow, where any valid criticism would be met with derision usually only found in Call of Duty matches. But the last couple of months of stratospheric growth and volatility have had me pulling at my self-imposed reins, just wanting to put these zealots in their place.
Since I can’t find anything better to post about it seems that today will be that day.
The last time I posted about BitCoins they were hovering around $25 (this was at the start of the year, mind you), a price not seen for a long time previously. It began a somewhat steady trend upwards after that, however it then had another great jump into the $100–$200 range, something I long expected to be completely unsustainable. It managed to stay around that area for a long time but the end of October saw it begin an upward trend that didn’t show many signs of stopping until recently, and the past couple of weeks have been an insane roller coaster ride of fluctuating prices that no currency should ever undergo.
Much of the initial growth was attributed to the fact that China was becoming interested in BitCoins and thus there was a whole new market of capital being injected into the economy. Whilst this might have fuelled the initial bump we saw back at the end of October, the resulting stratospheric rise, where the price doubled in under a month, could simply not be the result of new investors buying into the market. The reasoning behind this is that transaction volumes did not escalate at a similar pace, meaning those ridiculously unsustainable growth rates were driven by speculative investors looking to increase the value of their BitCoin portfolios, not a growing investor base.
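That reasoning boils down to a simple ratio. With made-up figures standing in for real market data, if the price grows far faster than on-chain transaction volume then the rise is speculative rather than driven by new economic activity:

```python
# Back-of-the-envelope speculation indicator. The growth figures below are
# invented for illustration, not real BitCoin market data.
def speculation_ratio(price_growth: float, volume_growth: float) -> float:
    """Ratio above 1 means price is outrunning actual usage of the currency."""
    return (1 + price_growth) / (1 + volume_growth)

# Price up 100% in a month while transaction volume is up only 10%:
ratio = speculation_ratio(price_growth=1.0, volume_growth=0.1)
print(f"price grew {ratio:.2f}x faster than usage")
```

A currency gaining genuine adoption would see the two grow roughly in step, keeping the ratio near 1.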
The anointed champions of BitCoin won’t have a bar of that however, even when the vast majority of forums were flooded with people who were crying when they cashed out at $400, lamenting the fact they could have had 3 times more if they’d only waited another week. As I’ve said dozens of times in the past the fact that the primary use of BitCoin right now is speculative investment is antithetical to its aspirations to become a true currency. Indeed the fact that it’s deflationary means that it inherently encourages this kind of action rather than being a medium for the transfer of wealth between parties. Indeed the inflationary aspect of fiat currencies, which BitCoiners seem to hate for some reason, encourages people to spend it rather than simply hanging on to it.
The flow-on effect of this rampant speculation is the wild fluctuations in value, which make using it incredibly difficult for businesses. Indeed any business that was selling goods for BitCoin prior to the current crash has lost money on those goods simply because of the fluctuations in price. Others would argue that retailers are typically better off because the price of BitCoin trends upwards, but history has shown that you simply can’t rely on that: unless you exchange your BitCoins for hard currency immediately after each purchase you’re almost guaranteed to hit a period of instability where you’ll end up on the losing end of the equation.
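A quick worked example (with hypothetical numbers) shows how a retailer ends up on the losing end: a sale priced in BitCoin is only worth what the coins fetch when they’re finally exchanged:

```python
# Hypothetical retailer scenario: prices and exchange rates are made up
# to illustrate the exposure, not taken from real market data.
def realised_value(sale_price_aud: float, btc_price_at_sale: float,
                   btc_price_at_exchange: float) -> float:
    coins_received = sale_price_aud / btc_price_at_sale  # BTC taken for the sale
    return coins_received * btc_price_at_exchange        # AUD when finally cashed out

# Sell a $100 item while BTC trades at $1000, then exchange after a crash
# to $600: the retailer realises only about $60 of the $100 sale.
print(realised_value(100, 1000, 600))
```

The only defence is converting to hard currency immediately, at which point BitCoin is functioning as a payment rail rather than a currency.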
Whilst I’m sure I’ve lost all the True BitCoin Believers at this point I feel I have to make the point that I think the idea of cryptocurrencies is great, as they’d be an excellent alternative method for transferring wealth across the world. BitCoin has some fundamental issues, many of which can’t be solved by a simple workaround here or there, and as such, whilst I won’t advocate its wholesale abandonment, I would encourage the development of alternatives to address these issues. Unfortunately none have been particularly forthcoming but as BitCoin continues to draw more attention to itself I can’t imagine they’re too far off, and then hopefully we can have the decentralized method of transferring wealth all BitCoiners like to talk about.