Technology

AlphaGo Beats Lee Sedol 4-1.

Back in January AlphaGo’s 5-0 defeat of Fan Hui, the reigning European champion, came out of left field. Go players and AI developers alike believed we were still some 10 years away from such a feat, so the resounding defeat of a champion was rather unexpected. However many still expected Lee Sedol, the long-time champion with a far higher ranking, to come out on top. The battle was set to be decided in the same format: 5 games with 2 hours of time for each side. Over the last week AlphaGo and Lee Sedol have been facing off game after game and AlphaGo has emerged victorious, winning 4 of the 5.


Just as when Kasparov lost to Deep Blue, AlphaGo’s victory has sent ripples through both the computing and Go communities. For technologists like me it’s a signal that we’ve made another leap forward in our quest for strong AI, having developed better methods for training neural networks. The Go community is less enthusiastic however, coming to terms with the fact that not even their game of choice is beyond AI’s capabilities. What is interesting to see is the conversation around AlphaGo’s style of play and the near-universal idea that it has some fundamental weaknesses that top Go players will look to exploit.

Indeed Lee Sedol’s one win against AlphaGo shows that it’s nowhere near being the perfect player and its play style needs refinement. It seems that AlphaGo tends to calculate the most advantageous moves for both itself and its opponent, using this as the basis for judging its future moves. However unexpected moves, ones that were pruned out of its search tree because it judged them sub-optimal for its opponent, seem to throw it for a loop. This is similar to how Kasparov initially beat Deep Blue, playing moves that sent it down a non-optimal search path before making his own, far more optimal, moves. Whether or not this can be developed into a viable strategy is something I’ll leave up to the reader, but suffice it to say I don’t think it’d remain a weakness for too long.

For some though Lee Sedol’s loss is merely a symbolic one as the real current champion is Ke Jie, who has an 8-2 record against AlphaGo’s latest opponent. Whilst I can’t really comment on how much better a player he is (I don’t follow Go at all) AlphaGo almost 5-0’d Lee Sedol and I’m sure it’d give Ke Jie a solid run for his money. I’m sure AlphaGo will continue to make appearances around the world and I’m eager to see if it can still come out on top.

One interesting thing to note is that AlphaGo did receive a little boost in computing power when facing off against Lee Sedol, getting another 700 CPUs and 30 GPUs to handle the additional calculations. However that extra hardware might not have been strictly required as the AlphaGo team has said that a single laptop version can beat their distributed one about 30% of the time. Regardless it seems the AlphaGo team thought Lee Sedol was going to be a much tougher challenge than Fan Hui and gave their AI a little boost just to be sure.

The AlphaGo team won’t be resting on their laurels after this however as they’ve got their sights set on bigger challenges, like StarCraft. I’m very much looking forward to seeing them attempt the not-so-traditional games as I think they’re a far more interesting challenge with many more potential applications.

Extreme Ultraviolet Lithography May Have a Chance to Shine After All.

It seems I can’t go a month without seeing at least one article decrying the end of Moore’s Law and another which shows that it’s still on track. Ultimately this dichotomy comes from the fact that we’re on the bleeding edge of materials science with new research being published often. At the same time I’m always sceptical of those saying that Moore’s Law is coming to an end as we’ve heard it several times before and, every single time, those limitations have been overcome. Indeed it seems that one technology even I had written off, Extreme Ultraviolet Lithography (EUV), may soon be viable.

[Image: ASML NXE:3100 EUV lithography system]

Our current process for creating computing chips relies on photolithography, essentially using light to project the transistor pattern onto the silicon. In order to create smaller and smaller transistors we’ve had to use increasingly shorter wavelengths of light. Right now we use deep ultraviolet light at a 193nm wavelength, which has been sufficient for etching features all the way down to the 10nm level. As I wrote last year this is about the limit for current technology, as even workarounds like double-patterning only get us so far given how expensive they are. EUV on the other hand works with light at 13.5nm, allowing for much finer details to be etched, although there have been some significant drawbacks which have prevented its use in at-scale manufacturing.
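For a rough sense of why wavelength matters so much, the smallest feature a lithography tool can resolve is often estimated with the Rayleigh criterion, CD ≈ k1 × λ / NA. The sketch below plugs in illustrative values (my own assumptions for k1 and NA, not any vendor’s published specs) for a 193nm immersion tool and a 13.5nm EUV tool.

```python
# Rough Rayleigh-criterion comparison of DUV immersion vs EUV resolution.
# The k1 and NA values are illustrative assumptions, not vendor specifications.

def min_feature_nm(wavelength_nm: float, numerical_aperture: float, k1: float) -> float:
    """Approximate smallest printable half-pitch: CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# 193nm ArF immersion lithography (water immersion pushes NA to ~1.35)
duv = min_feature_nm(wavelength_nm=193, numerical_aperture=1.35, k1=0.3)

# 13.5nm EUV with a typical ~0.33 NA mirror system
euv = min_feature_nm(wavelength_nm=13.5, numerical_aperture=0.33, k1=0.4)

print(f"DUV (193nm) single exposure: ~{duv:.0f}nm half-pitch")   # ~43nm
print(f"EUV (13.5nm) single exposure: ~{euv:.0f}nm half-pitch")  # ~16nm
```

That single-exposure gap is why multi-patterning became necessary to keep 193nm going, and why a 13.5nm source is so attractive despite its drawbacks.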

For starters, producing the required wattage of light at that wavelength is incredibly difficult. The power required to expose silicon with EUV at production speeds is around 250W, a low figure to be sure, however because nearly everything (including air) absorbs EUV the power generated at the source has to be far beyond that. Indeed even in the most advanced machines only around 2% of the total power generated actually ends up on the chip. This is what has led ASML to develop the exotic machine you see above, in which both the silicon substrate and the EUV light source operate in total vacuum. This setup is capable of delivering 200W, which is getting really close to the required threshold, but it still needs some additional engineering before it can be used for manufacturing.
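To get a feel for how punishing that ~2% end-to-end efficiency is, the back-of-the-envelope calculation below (my own numbers, treating the 200W and 250W figures above purely as delivered-power targets) works out how much raw power the source has to generate.

```python
# Back-of-the-envelope EUV power budget. The efficiency figure is the ~2%
# quoted above; the delivered-power targets are treated as illustrative inputs.

END_TO_END_EFFICIENCY = 0.02  # ~2% of generated EUV power reaches the chip

def source_power_required(delivered_watts: float,
                          efficiency: float = END_TO_END_EFFICIENCY) -> float:
    """Raw source power needed to land `delivered_watts` on the chip."""
    return delivered_watts / efficiency

for target_watts in (200, 250):
    needed_kw = source_power_required(target_watts) / 1000
    print(f"Delivering {target_watts}W at {END_TO_END_EFFICIENCY:.0%} efficiency "
          f"needs ~{needed_kw:.1f}kW at the source")
```

Every extra watt on the chip costs roughly fifty at the source, which goes a long way to explaining why the machines are so exotic and so expensive.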

However progress like this significantly changes the view many had of EUV and its potential for extending silicon’s life. Even last year when I was doing my research into it there weren’t many who were confident EUV would be able to deliver, given its limitations. However with ASML projecting that they’ll be able to deliver manufacturing capability in 2018 it’s suddenly looking a lot more feasible. Of course this doesn’t negate the other pressing issues, like interconnect widths bumping up against physical limitations, but that’s not a problem specific to EUV.

The race is on to determine what the next generation of computing chips will look like and there are many viable contenders. In all honesty it surprised me to learn that EUV was becoming such a viable candidate as, given its numerous issues, I felt that no one would bother investing in the idea. It seems I was dead wrong as ASML has shown that it’s not only viable but could be used in anger in a very short time. The next few node steps are going to be very interesting as they’ll set the tempo for technological progress for decades to come.

Google Provides Insight Into SSD Reliability.

SSDs may have been around for some time now but they’re still something of an unknown. Their performance benefits are undeniable and their cost per gigabyte has plummeted year after year. However, for the enterprise space, their unknown status has led to a lot of hedged bets when it comes to their use. Most SSDs have a large portion of over-provisioned space to accommodate failed cells and wear levelling. A lot of SSDs are sold as “accelerators”, meant to help speed up operations but not hold critical data for any length of time. This all comes from a lack of good data on their reliability and failure rates, something which can only come with time and use. Thankfully Google has been collecting just that and at a recent conference released a paper on their findings.

The paper focused on three different types of flash media: consumer-level MLC, the more enterprise-focused SLC and the somewhere-in-the-middle eMLC. These were all custom devices, sporting Google’s own PCIe interface and drivers, however the chips they used were your run-of-the-mill flash. The drives were divided into 10 categories: 4 MLC, 4 SLC and 2 eMLC. For each of these drive types several different metrics were collected over their 6 year lifetime: raw bit error rate (RBER), uncorrectable bit error rate (UBER), program/erase (PE) cycles and various failure rates (bad blocks, bad cells, etc.). All of these were then collated to provide insights into the reliability of SSDs, both in comparison to each other and to old-fashioned spinning rust drives.
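For reference, the two error-rate metrics are straightforward ratios of errors to bits read; the minimal sketch below shows the definitions, with invented counter values since the paper’s raw data obviously isn’t reproduced here.

```python
# Minimal sketch of the two error-rate metrics discussed in the Google paper.
# The counter values are invented purely for illustration.

def raw_bit_error_rate(corrected_bit_errors: int, bits_read: int) -> float:
    """RBER: bit errors the drive's ECC managed to correct, per bit read."""
    return corrected_bit_errors / bits_read

def uncorrectable_bit_error_rate(uncorrectable_errors: int, bits_read: int) -> float:
    """UBER: errors the ECC could NOT correct, per bit read."""
    return uncorrectable_errors / bits_read

bits_read = 10**15  # hypothetical drive that has served ~125TB of reads
print(f"RBER: {raw_bit_error_rate(5 * 10**8, bits_read):.1e}")     # 5.0e-07
print(f"UBER: {uncorrectable_bit_error_rate(10, bits_read):.1e}")  # 1.0e-14
```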

Probably the most stunning finding of the report is that, in general, SLC drives are no more reliable than their MLC brethren. For both enterprises and consumers this is a big deal as SLC based drives are often several times the price of their MLC equivalents. This should allay any fears that enterprises had about using MLC based products as they will likely be just as reliable and far cheaper. Indeed products like the Intel 750 series (one of which I’m using for big data analysis at home) provide the same capabilities as products that cost ten times as much and, based on Google’s research, will last just as long.

Interestingly the biggest predictive indicator for drive reliability wasn’t the RBER, UBER or even the number of PE cycles. In fact the most predictive factor of drive failure was the physical age of the drive itself. What this means is that, for SSDs, there must be other factors at play which affect drive reliability. The paper hypothesizes that this might be due to silicon aging but it doesn’t appear that they had enough data to investigate that further. I’m very much interested in how this plays out as it will likely come down to the way the chips are fabricated (i.e. different types of lithography, doping, etc.), something which does vary significantly between manufacturers.

It’s not all good news for SSDs however as the research showed that whilst SSDs have an overall failure rate below that of spinning rust they do exhibit a higher UBER. What this means is that SSDs will have a higher rate of unrecoverable errors which can lead to data corruption. Many modern operating systems, applications and storage controllers are aware of this and can accommodate it but it’s still an issue for systems with mission/business critical data.

This kind of insight into the reliability of SSDs is great and just goes to show that even nascent technology can be quite reliable. The MLC vs SLC comparison is telling, showing that whilst a certain technology may exhibit one better characteristic (in this case PE cycle count) that might not be the true indicator of reliability. Indeed Google’s research shows that the factors we have been watching so closely might not be the ones we need to look at. Thus we need to develop new ideas in order to better assess the reliability of SSDs so that we can better predict their failures. Then, once we have that, we can work towards eliminating those failures, making SSDs more reliable still.

HTC Vive Will Debut at $799.

The $599 price tag of the consumer Oculus Rift was off-putting to many, including myself. It’s not that we expected the technology to be cheap, more that our expectations were set at what we considered a much more reasonable level. I wrote at the time that HTC and Sony would likely rush in with their own VR headsets swiftly afterwards, likely at a much lower price point, to take advantage of the Oculus’ more premium status. I was right on one count, HTC has since announced theirs, but at the higher price point of $799. It seems that, at this stage in the game, there’s no way to do VR on the cheap.


The two products are largely comparable in terms of raw specifications, having the same screens for each eye and both providing the same level of “sit down” VR experience. However the Vive pulls ahead of the Oculus in two respects, the first being the inclusion of two hand-tracking controllers. The current version of the Oculus includes an Xbox One controller, with their Touch controllers due out sometime later this year (at a currently undisclosed price). What really sets the HTC Vive apart from the Oculus though is the inclusion of two Lighthouse tracking base stations, which allow the Vive to do full-body tracking in a 16m² space.

These two additions explain the price gap between the two headsets, however they also show that there’s a floor price when it comes to VR headsets. I had honestly thought that both HTC’s and Sony’s offerings would come in at a cheaper price point than the Oculus however now I’m not so sure. Sony may be able to cut some corners due to the stable hardware platform they’ll be working with (the PS4) however I don’t think that will make it that much cheaper. Indeed looking at the current specs of the PlayStation VR shows that the only real difference at this point is the slightly lower screen resolution (although it does support 120Hz, superior to the Oculus and Vive). With that in mind we’d be lucky to see it much, if at all, below the $599 price point that Oculus set last month.

So for Oculus, debuting at the price point they chose might not have been the disaster I first thought it would be. Oculus might very well have developed the Model T of VR that everyone was hoping for, it just ended up costing a lot more than we’d hoped it would. For many though I still feel like this will mean they’ll give the V1.0 VR products a miss, instead waiting for economies of scale to kick in or a new player to enter the market at a cheaper price point. This will hamper the adoption of VR, and by extension titles developed for VR, in the short term. However after a year or two there’s potential for newer models and the secondary market for used headsets to start ramping up, potentially opening up access to customers who had abstained previously.

For myself I think I’ll have to wait to be convinced that the investment in a VR headset will be worth it. I bought an Xbox just so I could play Mass Effect when it first came out and, should something of similar calibre find itself on any one of the VR platforms, I can see myself doing the same again. However, with the relatively high price point coupled with the lack of enticing titles or killer apps, I’m not really willing to make such an investment in a V1.0 product right now. I, as always, remain willing to have my opinion changed and, by consequence, my wallet opened.

First They Came for Chess, Now They Come for Go.

Computers are better than humans at a lot of things but there are numerous problem spaces where they struggle. Anything with complex branching or large numbers of possibilities forces them into costly jumps, negating the benefits of their ability to think in microsecond increments. This is why it took computers so long to go from beating humans at something like tic-tac-toe, a computationally simple game, to beating humans at chess. However one game has proven elusive to even the most cutting edge AI developers: the seemingly simple game of Go. This is because, unlike chess and other games which often rely on brute forcing out many possible moves and calculating the best one, Go has an incomprehensibly large number of possible moves, making such an approach near impossible. However Google’s DeepMind AI, using their AlphaGo algorithms, has successfully defeated the top European player and will soon face its toughest challenge yet.

Unlike previous game-playing AIs, which often relied on calculating board scores of potential moves, AlphaGo is a neural network that’s undergone what’s called supervised learning. Essentially they’ve taken professional-level Go games and fed their moves into a neural network. It’s then told which outcomes lead to success and which ones don’t, allowing the neural network to develop its own pattern recognition for winning moves. This isn’t what let them beat a top Go player however, as supervised learning is a well-established principle in the development of neural networks. Their secret sauce appears to be the combination of an algorithm called Monte Carlo Tree Search (MCTS) and the fact that they pitted the AI against itself in order for it to get better.

MCTS is a very interesting idea, one that’s broadly applicable to games with a finite set of moves or those with set limits on play. Essentially what MCTS does is select moves and play them out at random until the game is finished. Then, when the outcome of that playout is known, the moves made along the way have their weightings adjusted according to how successful they were. This, in essence, lets you home in on the most promising moves without exhaustively searching the whole game tree. Of course the tradeoff here is between how long and deep you let the search run and how much time you have to decide on a move.
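To make that loop concrete, here’s a minimal, game-agnostic MCTS sketch. It’s nothing like AlphaGo’s scale (no neural networks guiding the search, just random playouts and UCB1 selection) but it shows the select / expand / simulate / backpropagate cycle described above, demonstrated on a trivial take-1-to-3-stones Nim game I’ve made up purely for the example.

```python
import math
import random

# --- A trivial game so the sketch is self-contained: single-pile Nim. ---
# Players alternately remove 1-3 stones; whoever takes the last stone wins.

class NimState:
    def __init__(self, stones=15, player_just_moved=2):
        self.stones = stones
        self.player_just_moved = player_just_moved  # player 1 or 2

    def clone(self):
        return NimState(self.stones, self.player_just_moved)

    def legal_moves(self):
        return list(range(1, min(3, self.stones) + 1))

    def do_move(self, move):
        self.stones -= move
        self.player_just_moved = 3 - self.player_just_moved

    def winner(self):
        # Whoever took the last stone (the player who just moved) wins.
        return self.player_just_moved if self.stones == 0 else None


class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state
        self.parent = parent
        self.move = move
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0  # wins from the perspective of state.player_just_moved

    def ucb1_child(self, c=1.4):
        # Balance exploitation (win rate) against exploration (rarely tried moves).
        return max(self.children, key=lambda n: n.wins / n.visits +
                   c * math.sqrt(math.log(self.visits) / n.visits))


def mcts(root_state, iterations=2000):
    root = Node(root_state.clone())
    for _ in range(iterations):
        node, state = root, root_state.clone()

        # 1. Selection: descend via UCB1 while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.ucb1_child()
            state.do_move(node.move)

        # 2. Expansion: add one child node for an untried move.
        if node.untried:
            move = random.choice(node.untried)
            node.untried.remove(move)
            state.do_move(move)
            child = Node(state.clone(), parent=node, move=move)
            node.children.append(child)
            node = child

        # 3. Simulation: random playout until the game ends.
        while state.winner() is None:
            state.do_move(random.choice(state.legal_moves()))

        # 4. Backpropagation: credit the result to every node on the path.
        final_winner = state.winner()
        while node is not None:
            node.visits += 1
            if final_winner == node.state.player_just_moved:
                node.wins += 1
            node = node.parent

    # The most-visited child of the root is the recommended move.
    return max(root.children, key=lambda n: n.visits).move


if __name__ == "__main__":
    print("MCTS suggests taking", mcts(NimState(stones=15)), "stones")
    # Taking 3 (leaving a multiple of 4) is the known optimal play here.
```

Roughly speaking, AlphaGo replaces the random choices above with its policy network and augments the random playouts with a value network, which is what makes searching Go’s enormous tree tractable.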

This is where the millions of games AlphaGo played against itself come into play, as they allowed both the neural networks and the MCTS algorithm to be greatly refined. In their single-machine tests it lost to other Go programs only once out of almost 500 games. In the match against Fan Hui however he faced a veritable army of hardware, some 170 GPUs and 1200 CPUs. That should give you some indication of just how complex Go is and what it’s taken to get to this point.

AlphaGo’s biggest challenge is ahead of it though as it prepares to face the top Go player of the last decade, Lee Sedol. In terms of opponents Lee is an order of magnitude higher, being a 9th dan to Fan’s 2nd dan. How they structure the matches and the infrastructure to support AlphaGo will be incredibly interesting, but whether or not it will come out victorious is anyone’s guess.

Bitcoin at a Crossroads.

Despite what others seem to think I’ve always liked the idea behind cryptocurrencies. A decentralized method of transferring wealth between parties, free from outside influence, has an enormous amount of value as a service. Bitcoin was the first incarnation of this idea to actually work, pioneering the proof-of-work system and the decentralized structure that were critical to its success. However the Bitcoin community and I soon parted ways as my writings on its use as a speculative investment vehicle rubbed numerous people the wrong way. It seems that the tendency to punish anyone who runs against the groupthink goes all the way to the top of the Bitcoin community and may ultimately spell its demise.


Bitcoin, for those who haven’t been following it, has recently faced a dilemma. The payment network is currently limited by the size of each “block”, basically the size of each entry in the decentralized ledger, which puts an upper limit on the number of transactions that can be processed per second. The theoretical ceiling is approximately 7 transactions per second, however as the blockchain has developed the practical limit has fallen to less than half that. Whilst even the theoretical ceiling (~600,000 transactions a day) sounds like a lot it’s a far cry from what regular payment institutions handle. This limitation needs to be addressed as the Bitcoin network already experiences severe delays in confirming transactions and it won’t get any better as time goes on.
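The maths behind those daily figures is simple enough to show directly, using the approximate rates quoted above:

```python
# Approximate Bitcoin throughput, using the rates quoted above.
SECONDS_PER_DAY = 24 * 60 * 60

theoretical_tps = 7  # oft-quoted theoretical ceiling
practical_tps = 3    # "less than half" of that in practice

print(f"Theoretical: ~{theoretical_tps * SECONDS_PER_DAY:,} transactions/day")  # ~604,800
print(f"Practical:   ~{practical_tps * SECONDS_PER_DAY:,} transactions/day")    # ~259,200
```

For comparison the big card networks routinely claim capacity in the thousands of transactions per second, which is what makes the gap so stark.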

Some of the core Bitcoin developers proposed an extension to the core Bitcoin framework called Bitcoin XT. This fork of the original client increased the block size to 8MB and proposed doubling it every 2 years, up to 10 times, making the final block size somewhere around 8GB. This would’ve helped Bitcoin overcome some of the fundamental issues it is currently facing but it wasn’t met with universal approval. The developers left it up to the community to decide, as the Bitcoin XT client was still compatible with the current network: the community would vote with its hashing power and the change could happen without much further intervention.
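Bitcoin XT’s growth schedule is easy to sanity-check; ten doublings from an 8MB starting point does indeed land at 8GB:

```python
# Bitcoin XT's proposed block size schedule: start at 8MB and double
# every two years, with ten doublings in total.
size_mb = 8
for doubling in range(1, 11):
    size_mb *= 2
    print(f"After doubling {doubling:2d}: {size_mb:,}MB")

print(f"Final block size: {size_mb / 1024:.0f}GB, reached after ~20 years")  # 8192MB = 8GB
```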

However the idea of a split between the core developers sent ripples through the community. This has since culminated in one of the lead developers leaving the project, declaring that it has failed.

His resignation sparked a quick downturn in the Bitcoin market, with the price shedding about 20% of its value almost immediately. Whilst this isn’t the death knell of Bitcoin (it soon regained some of the lost ground) it does show why the Bitcoin XT idea was so divisive. Bitcoin, whilst structured in a decentralised manner, has become anything but decentralised with the development of large mining pools which control the lion’s share of the Bitcoin processing market. The resistance to change has largely come from them and from those with a monetary interest in Bitcoin remaining the way it is: under their control. Whilst many will still uphold it as a currency of the people the unfortunate fact is that Bitcoin is far from that now, and is in need of change.

It is here that Bitcoin finds itself at a crossroads. There’s no doubt that it will soon run up hard against its own limitations and change will have to come eventually. The question is what kind of change, and whether it will be to the benefit of all or just the few. The core tenets which first endeared me to cryptocurrencies still hold true within Bitcoin, however its current implementation and those who control its ultimate destiny seem to be at odds with them. Suffice it to say Bitcoin’s future is looking just as tumultuous as its past, and that’s never been one of its admirable qualities.

Hard Numbers Show MTM NBN Cost Blowouts, Delays.

In my mind there’s never been any doubt that the MTM NBN is a complete debacle. At a technological level it is an inferior idea, one that fails to provide the base infrastructure needed to support Australia’s future requirements. As time went by the claims of “Fast, Affordable, Sooner” fell one by one, leaving us with a tangled mess that in all likelihood was going to cost us just as much as the FTTP solution would have. Now there’s some hard evidence, taken directly from nbn’s own reports and other authoritative sources, which confirms this is the case, and it’s caused quite a stir from the current communications minister.


The evidence, released by the Labor party last week in a report to the media, includes the following table and points to numerous sources for their data:

[Table: Labor's NBN cost blowout and delay figures]

The numbers are essentially a summary of information that has already been reported previously, however it’s pertinent to see it all collated together in a single table. The first row, showing that the initial cost estimates were far out of line with reality, has been reported on extensively, even before the solution was fully costed out in the atrocious CBA. There was no way that the initial claims about FTTN install costs would hold, especially with the number of cabinets required to ensure that speeds met their required minimums. Similarly assuming that Telstra’s copper network was only in need of partial remediation was pure fantasy as the aging network has been in sore need of an overhaul for decades. The rest of the figures simply boil down to the absolutely bullshit forecasting the Liberals did in order to make their plan look better on paper.

This release has prompted a rather heated response from the current communications minister, Mitch Fifield, one which strangely ignores all the points that the Labor release raised. Since the figures have mostly been taken directly from nbn’s own reports it would be hard for him to refute them, yet he made no attempt to reframe the conversation around other, more flattering metrics either. Instead his response focuses on a whole slew of other incidents surrounding the NBN, none of which are relevant to the facts above. With that in mind you can only conclude that he knows those facts are true and there’s no point in fighting them.

It’s completely clear that the Liberals have failed on their promises with the NBN and the song and dance they continually engage in to make it seem otherwise simply doesn’t work. The current state of the NBN is entirely on their heads now and any attempt to blame the previous government is a farce, carefully crafted to distract us from the real issues at hand. They could have simply replaced nbn’s board and continued on as normal, claiming all the credit for the project. Instead they tried to make it their own and failed, dooming Australia to be an Internet backwater for decades to come.

Take your licks Fifield and admit that your government fucked up. Then I’ll grant you some leniency.

Adobe Animate CC: Flash’s Death Knell.

Flash has been dying a long, slow and painful death. Ever since it developed a reputation for being a battery killer, something which is abhorrent in today’s mobile-centric world, consumers have been avoiding it at every opportunity. This has been aided tremendously by the adoption of web standards by numerous large companies, ensuring that no one is beholden to Adobe’s proprietary software. However the incredibly large investment in Flash was never going to disappear overnight, especially since Adobe’s tooling around it is still one of its biggest money makers. It seems Adobe is ready to start digging Flash’s grave however, with the announcement that Flash Professional will become Adobe Animate CC.


The change honestly shouldn’t come as much of a surprise as the writing has been on the wall for some time. Adobe first flirted with the idea of Flash Professional being an HTML5 tool way back in 2011 with their Wallaby framework and continued to develop that capability as time went on. Of course it was still primarily a Flash development tool, and the majority of people using it are still developing Flash applications, however it was clear that the market wanted to move away from Flash and onto standards-based alternatives. That being said, the rebranding of the product away from being a Flash tool signals that Adobe is ready to let Flash start fading into the background and let the standards-based web take over.

Interestingly the change is likely not revenue-driven, as the income Adobe derives directly from Flash Professional is only around 6% or so of its total. Rather it seems to be about bolstering their authoring tools as the standard for all rich web content, broadening the potential user base for the Animate CC application. From that perspective there’s some potential for the rebranding to work, especially since standards-based development is now one of their key marketing plays. Whether that will be enough to pull people away from the alternatives that cropped up in the interim is less clear, but Adobe does have a good reputation when it comes to making creative tools.

Flash will likely still hang around in the background for quite some time though, as much of the infrastructure built up around that ecosystem is slow to change. YouTube is a good example: it dumped Flash as the default for Chrome some time ago, but that still left around 80% of its visitors defaulting to Flash. Similarly other sites still rely on Flash for ads and other rich content, with standards-based solutions really only being the norm for newly developed websites and products. How long Flash will hang around is an open-ended question but I don’t see it disappearing within the next few years.

We’re rapidly approaching a post-Flash world and we will all be much better off for it. Flash is a relic of a different time on the Internet, one where proprietary standards were the norm and everyone was battling for platform dominance. Adobe is now shifting towards the larger market of being the tool of choice for content creators on a standards-based web, a battle they’re much more likely to win than the fight to keep Flash alive. I, like many others, won’t be sad to see Flash go as the time has come for it to make way for the new blood of the Internet.

Li-Fi: 100 Times Faster, 100 Times Less Useful.

There are certain fundamental limitations when it comes to current wireless communications. Mostly it comes down to the bandwidth of the frequencies used: as more devices come online those bands become ever more congested. Simply changing frequencies isn’t enough to solve the problem, however, especially when it comes to technology as ubiquitous as wifi. This is what has driven many to look for alternative technologies, some looking to make the interference work for us whilst others are looking at doing away with radio frequencies entirely. Li-Fi is a proposed technology that uses light instead of RF to transmit data and, whilst it posits speeds up to 100 times faster than conventional wifi, I doubt it will ever become the wireless communication technology of choice.


Li-Fi utilizes standard light bulbs that are switched on and off in nanoseconds, far too fast for the human eye to perceive any change in the light’s output. Whilst the lights need to remain in an on state in order to transmit data, they are apparently still able to transmit when dimmed below the level the human eye can perceive. A direct line of sight isn’t strictly required for the technology to work either, as light reflected off walls was still able to produce a usable, albeit significantly reduced, data signal. The first commercial products were demonstrated some time last year, so the technology isn’t just a nice theory.
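Conceptually the simplest form of this is just very fast on-off keying: the light’s intensity flips between two levels to encode bits. The toy sketch below illustrates the idea only; real Li-Fi systems run at rates the eye can’t follow and use far more sophisticated modulation schemes (such as OFDM).

```python
# Toy illustration of on-off keying: encode bytes as a stream of light states
# (1 = brighter, 0 = dimmer) and decode them again. Purely conceptual; real
# Li-Fi uses much more advanced modulation at far higher rates.

def encode_ook(data: bytes) -> list[int]:
    """Turn each byte into 8 light states, most significant bit first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def decode_ook(states: list[int]) -> bytes:
    """Rebuild bytes from a stream of sampled light states."""
    out = bytearray()
    for i in range(0, len(states) - len(states) % 8, 8):
        byte = 0
        for state in states[i:i + 8]:
            byte = (byte << 1) | state
        out.append(byte)
    return bytes(out)

signal = encode_ook(b"Li-Fi")
print(signal[:8])          # light states for the first byte ('L')
print(decode_ook(signal))  # b'Li-Fi'
```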

However the technology is severely limited by numerous factors. The biggest limitation is the fact that it can’t work without near or direct line of sight between the sender and receiver, which means a transmitter is required in every discrete room you want to use your receiver in. This also means that whatever is feeding data into those transmitters, like say a cabled connection, also needs to be present. Compared to a wifi endpoint, which usually just needs to be placed in a central location to work, this is a rather heavy requirement to satisfy.

Worse still, this technology cannot work outside due to sunlight overpowering the signal. This likely also means that any indoor implementation would suffer greatly if there was sunlight entering the room. Thus the claim that Li-Fi is 100 times faster than conventional wifi is likely based on laboratory numbers and not representative of real-world performance.

The primary driver for technologies like these is convenience, something which Li-Fi simply can’t provide given its current limitations. Setting up a Li-Fi system won’t be as easy as screwing in a few new light bulbs; it will likely require some heavy investment in either cabling infrastructure or ethernet-over-power systems to support the transmitters. Compare this to a wifi endpoint, which just needs one data connection to cover a large area (and can be set up in minutes), and I’m not sure customers will care how fast Li-Fi can be, especially if they also have to buy a new smartphone to use it.

I’m sure there will be some niche applications of this technology but past that I can’t really see it catching on. Faster speeds are always great but they’re all for naught if the limitations on their use are as severe as they are with Li-Fi. Realistically you can get pretty much the same effect with a wired connection and even then the most limiting factor is likely your Internet connection, not your interconnect. Of course I’m always open to being proved wrong on this but honestly I can’t see it happening.

Amazon Teases Prime Air, Again.

The last time I wrote about Amazon Prime Air was almost 2 years ago to the day and back then it seemed to be little more than a flight of fancy. Drones, whilst somewhat commonplace at the time, were still something of an emerging space, especially when it came to regulations and commercial use. Indeed the idea instantly ran afoul of the FAA, something which Amazon was surprisingly blasé about at the time. Still there had been musings of them continuing development of the program and today they’ve shown off another prototype drone that they might use in the future.


The drone is an interesting beast, capable of both VTOL and regular flight. This was most likely done to increase the effective range of the craft as traditional flight is a lot less energy-intensive than pure VTOL flight. The new prototype has a stated range of 16 miles (about 25km), which you’d probably have to cut in half for the return trip. Whilst that’s likely an order of magnitude above the previous prototype they showcased 2 years ago it still means that a service based on these drones will either be very limited or Amazon is planning a massive shakeup of its distribution network.
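A quick bit of arithmetic shows just how tight that range is for a delivery service, assuming (as seems likely) the drone has to fly itself back to base on the same charge:

```python
# Effective delivery radius for the stated 16-mile range, assuming the
# drone has to make the return trip on the same charge.
MILES_TO_KM = 1.609344

stated_range_miles = 16
stated_range_km = stated_range_miles * MILES_TO_KM
delivery_radius_km = stated_range_km / 2

print(f"Stated range: ~{stated_range_km:.1f}km")                # ~25.7km
print(f"One-way delivery radius: ~{delivery_radius_km:.1f}km")  # ~12.9km
```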

Of course the timing of this announcement (and the accompanying video) mere hours before the yearly Cyber Monday sale kicks off in earnest is no coincidence. Amazon Prime Air is undeniably a marketing tactic, one that’s worked well enough in the past to warrant them trying it again in order to boost sales on the day. On the flip side Amazon does seem pretty committed to the idea, with their various proposals for airspace usage and “dozens of prototypes” in the works, however until they start offering the service to real customers it’s going to be easy to remain skeptical.

Last time I wrote about Amazon Prime Air one of my local readers mentioned that a similar service was looking to take off here in Australia. The offering was going to be a joint effort between Flirtey, a delivery drone developer, and Zookal, a local textbook sale and rental service. They were targeting the middle of last year for their first delivery by drone, however that never came to pass. Indeed an article from earlier this year was all I could dredge up on the service, and at that point it had still yet to be used commercially. To their credit Flirtey did make the first drone delivery in the USA in July this year, so the technology is there, it just needs to be put to use.

Whether or not something like this will see widespread adoption however is something I’m still not sure on. Right now the centralized distribution models that most companies employ simply don’t work with the incredibly limited range that most drones have. Even if the range issue could be solved I’m still not sure if it would be economical to use them, unless the delivery fees were substantially higher (and then how many customers would pay for that?). Don’t get me wrong, I still think it’d be incredibly cool to get something delivered by drone, but at this point I’m still not 100% sold on the idea that it can be done economically.