Technology

First They Came for Chess, Now They Come for Go.

Computers are better than humans at a lot of things but there are numerous problem spaces where they struggle. Anything with complex branching or a large number of possibilities forces them into costly searches through enormous decision trees, negating the benefit of their ability to think in microsecond increments. This is why it took computers so long to go from beating humans at something like tic-tac-toe, a computationally simple game, to beating humans at chess. However one game has proven elusive to even the most cutting edge AI developers: the seemingly simple game of Go. Unlike chess and other games, where engines can brute force their way through many possible moves and calculate the best one, Go has an incomprehensibly large number of possible moves, making such an approach near impossible. However Google's DeepMind, with its AlphaGo program, has successfully defeated the top European player and will soon face its toughest challenge yet.
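To give a rough sense of the scale involved, here's a quick back-of-the-envelope comparison of game-tree sizes. The branching factors and game lengths are the commonly cited approximations rather than figures from DeepMind, so treat the output as illustrative only:

```python
# Rough, illustrative comparison of game-tree sizes for tic-tac-toe, chess and Go.
# Branching factors and game lengths are commonly cited approximations, not exact figures.
from math import log10

games = {
    "tic-tac-toe": (4, 9),     # ~4 legal moves on average, ~9 plies
    "chess":       (35, 80),   # ~35 legal moves per position, ~80 plies
    "go":          (250, 150), # ~250 legal moves per position, ~150 plies
}

for name, (branching, depth) in games.items():
    # Game-tree size is roughly branching_factor ** depth; we report the
    # exponent (log10) because the raw numbers are astronomically large.
    exponent = depth * log10(branching)
    print(f"{name:12s} ~10^{exponent:.0f} possible games")
```

The exponents alone, roughly 10^123 for chess versus 10^360 for Go, show why the brute force approach that eventually conquered chess simply doesn't translate.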

Unlike previous game playing AIs, which often relied on calculating board scores for potential moves, AlphaGo is built on neural networks that have undergone what's called supervised learning. Essentially DeepMind took professional level Go games and fed their moves into a neural network, telling it which moves the experts made and which outcomes led to success, allowing the network to develop its own pattern recognition for winning moves. This isn't what let them beat a top Go player however, as supervised learning is a well established principle in the development of neural networks. Their secret sauce appears to be the combination of an algorithm called Monte Carlo Tree Search (MCTS) and the fact that they pitted the AI against itself in order for it to get better.
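For those curious what "supervised learning on professional games" looks like in practice, here's a minimal sketch of a move-prediction (policy) network. To be clear this is not DeepMind's actual architecture, just an illustration of training a network to predict the expert's move for a given board position; the layer sizes and the stand-in data are my own placeholders:

```python
# Minimal sketch of supervised learning for a Go move-prediction ("policy") network.
# Illustrative only: the real AlphaGo policy network is a much deeper convolutional
# network trained on tens of millions of professional positions.
import torch
import torch.nn as nn

BOARD = 19  # 19x19 Go board

class TinyPolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 3 planes: black, white, empty
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * BOARD * BOARD, BOARD * BOARD),  # one logit per intersection
        )

    def forward(self, x):
        return self.net(x)

# Fake stand-in for a dataset of (position, expert move) pairs from professional games.
positions = torch.randn(64, 3, BOARD, BOARD)
expert_moves = torch.randint(0, BOARD * BOARD, (64,))

model = TinyPolicyNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()  # "did the network predict the expert's move?"

for epoch in range(5):
    optimiser.zero_grad()
    logits = model(positions)
    loss = loss_fn(logits, expert_moves)  # supervised signal from human play
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```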

MCTS is a very interesting idea, one that's broadly applicable to games with a finite set of moves or those with set limits on play. Essentially what an MCTS will do is select moves at random and play them out until the game is finished. When the outcome of that playout is determined, the moves made along the way are used to adjust the weightings of how promising those moves appear to be. This, in essence, allows you to home in on the most optimal set of moves by progressively refining down the problem space. Of course the tradeoff here is between how deep you want the search to go and how long you have to decide on a move.
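A bare-bones version of the select/expand/simulate/backpropagate loop makes the idea clearer. The sketch below runs MCTS on a toy Nim-style game rather than Go (AlphaGo's version is far more sophisticated, using its neural networks rather than purely random playouts to guide the search), but the core mechanic is the same:

```python
# A minimal Monte Carlo Tree Search sketch on a toy game: players alternately
# remove 1-3 counters and whoever takes the last counter wins.
import math
import random

def legal_moves(counters):
    return [m for m in (1, 2, 3) if m <= counters]

def random_playout(counters, to_move):
    """Play uniformly random moves to the end; return the winner (0 or 1)."""
    while True:
        counters -= random.choice(legal_moves(counters))
        if counters == 0:
            return to_move          # this player took the last counter
        to_move = 1 - to_move

class Node:
    def __init__(self, counters, to_move, parent=None, move=None):
        self.counters, self.to_move = counters, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.visits = 0
        self.wins = 0.0             # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        # Balance exploitation (observed win rate) with exploration (rarely
        # visited moves get a bonus that shrinks as they are tried more often).
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def best_move(counters, to_move, iterations=5000):
    root = Node(counters, to_move)
    for _ in range(iterations):
        # 1. Selection: walk down via UCB until we hit an unexpanded or terminal node.
        node = root
        while node.children and all(c.visits > 0 for c in node.children):
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: create children for every legal move.
        if node.counters > 0 and not node.children:
            node.children = [Node(node.counters - m, 1 - node.to_move, node, m)
                             for m in legal_moves(node.counters)]
        if node.children:
            unvisited = [c for c in node.children if c.visits == 0]
            node = random.choice(unvisited or node.children)
        # 3. Simulation: random playout from here to the end of the game.
        if node.counters == 0:
            winner = 1 - node.to_move       # the previous player took the last counter
        else:
            winner = random_playout(node.counters, node.to_move)
        # 4. Backpropagation: update win/visit counts along the path back to the root.
        while node is not None:
            node.visits += 1
            if winner != node.to_move:      # a win for the player who moved into this node
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

# With 10 counters the optimal move is to take 2, leaving a multiple of 4 for
# the opponent; the search should settle on this reliably.
print(best_move(counters=10, to_move=0))
```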

This is where the millions of games that AlphaGo played against itself come into play, as they allowed both the neural networks and the MCTS algorithm to be greatly refined. In their single machine tests it only lost to other Go programs once out of almost 500 games. In the match played against Fan Hui however he was up against a veritable army of hardware, some 170 GPUs and 1,200 CPUs. That should give you some indication of just how complex Go is and what it's taken to get to this point.

AlphaGo’s biggest challenge is ahead of it though as it prepares to face down the current top Go player of the last decade, Lee Sedol. In terms of opponents Lee is an order of magnitude higher being a 9th Dan to Fan’s 2nd Dan. How they structure the matches and their infrastructure to support AlphaGo will be incredibly interesting but whether or not it will come out victorious is anyone’s guess.

Bitcoin at a Crossroads.

Despite what others seem to think I've always liked the idea behind cryptocurrencies. A decentralized method of transferring wealth between parties, free from outside influence, has an enormous amount of value as a service. Bitcoin was the first incarnation of this idea to actually work, pioneering the proof-of-work system and the decentralized structure that were critical to its success. However the Bitcoin community and I soon parted ways as my writings on its use as a speculative investment vehicle rubbed numerous people the wrong way. It seems that the hostility towards anyone who runs against the groupthink goes all the way to the top of the Bitcoin community and may ultimately spell its demise.

Bitcoin, for those who haven't been following it, has recently faced a dilemma. The payment network is currently limited by the size of each "block", basically the size of each entry in the decentralized ledger, which puts an upper limit on the number of transactions that can be processed per second. The theoretical upper limit was approximately 7 per second, however the way the blockchain is used in practice meant that the real limit was less than half that. Whilst even the theoretical maximum sounds like a lot of transactions (~600,000 a day) it's a far cry from what regular payment institutions handle. This limitation needs to be addressed as the Bitcoin network already experiences severe delays in confirming transactions and it won't get any better as time goes on.
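The oft-quoted ~7 transactions per second falls out of some simple arithmetic on the 1MB block size; the 250 byte average transaction size below is an optimistic assumption, which is why the real-world figure ends up lower:

```python
# Back-of-the-envelope Bitcoin throughput: a 1 MB block mined roughly every
# 10 minutes, with an assumed average transaction size.
BLOCK_SIZE_BYTES = 1_000_000
BLOCK_INTERVAL_SECONDS = 600
AVG_TX_BYTES = 250          # optimistic assumption; real transactions are often larger

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS
print(f"~{tx_per_second:.1f} tx/s, ~{tx_per_second * 86_400:,.0f} tx/day")
# With ~250-byte transactions this gives ~6.7 tx/s (~576,000/day); with the
# larger transactions seen in practice the sustained rate drops to ~3 tx/s.
```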

Some of the core Bitcoin developers proposed an extension to the core Bitcoin framework called Bitcoin XT. This fork of the original client increased the block size to 8MB and proposed to double it every 2 years, up to 10 times, making the final block size somewhere around 8GB. This would've helped Bitcoin overcome some of the fundamental issues it is currently facing but it wasn't met with universal approval. The developers decided to leave it up to the community as the Bitcoin XT client was still compatible with the current network: the community would vote with its hashing power and the change could happen without much further intervention.
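The ~8GB end point is just the arithmetic of that doubling schedule:

```python
# Bitcoin XT's proposed schedule: start at 8 MB and double every two years,
# ten times in total (i.e. over twenty years).
initial_mb = 8
final_mb = initial_mb * 2 ** 10
print(f"{final_mb} MB ~= {final_mb / 1024:.0f} GB")   # 8192 MB ~= 8 GB
```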

However the idea of a split between the core developers sent ripples through the community. This has since culminated in one of the lead developers, Mike Hearn, leaving the project and declaring that it has failed.

His resignation sparked a quick downturn in the Bitcoin market, with the price immediately shedding about 20% of its value. Whilst this isn't the death knell of Bitcoin (it soon regained some of the lost ground) it does show why the Bitcoin XT idea was so divisive. Bitcoin, whilst structured in a decentralised manner, has become anything but that with the rise of large mining pools which control the lion's share of the network's hashing power. The resistance to change has largely come from them and those with a monetary interest in Bitcoin remaining the way it is: under their control. Whilst many will still uphold it as a currency of the people the unfortunate fact is that Bitcoin is far from that now, and is in need of change.

It is here that Bitcoin finds itself at a crossroads. There's no doubt that it will soon run up hard against its own limitations and change will have to come eventually. The question is what kind of change, and whether it will be to the benefit of all or just the few. The core tenets which first endeared me to cryptocurrencies still hold true within Bitcoin, however its current implementation and those who control its ultimate destiny seem to be at odds with them. Suffice to say Bitcoin's future is looking just as tumultuous as its past, and that's never been one of its admirable qualities.

Hard Numbers Show MTM NBN Cost Blowouts, Delays.

In my mind there’s never been any doubt that the MTM NBN has been anything more than a complete debacle. At a technological level it is an inferior idea, one that fails to provide the base infrastructure needed to support Australia’s future requirements. As time went by the claims of “Fast, Affordable, Sooner” fell one by one, leaving us with a tangled mess that in all likelihood was going to cost us just as much as the FTTP solution would. Now there’s some hard evidence, taken directly from nbn’s own reports and other authoritative sources, which confirms this is the case and it’s caused quite a stir from the current communications minister.

The evidence, released by the Labor party last week in a report to the media, includes the following table and points to numerous sources for their data:

Labor NBN Cost Blowout Delay Table

The numbers are essentially a summary of information that has already been reported on previously, however it's pertinent to see it all collated in a single table. The first row, showing that the initial cost estimates were far out of line with reality, has been reported on extensively, even before the solution was fully costed out in the atrocious CBA. There was no way the initial claims about FTTN install costs would hold, especially with the number of cabinets required to ensure the speeds met their required minimums. Similarly, assuming that Telstra's copper network was only in need of partial remediation was pure fantasy, as the aging network has been in sore need of an overhaul for decades. The rest of the figures simply boil down to the absolutely bullshit forecasting that the Liberals did in order to make their plan look better on paper.

This release has prompted a rather heated response from the current communications minister, Mitch Fifield, one which strangely ignores all the points that the Labor release raised. Since the figures have mostly been taken directly from nbn's reports it would be hard for him to refute them, yet he made no attempt to reframe the conversation around other, more flattering metrics. Instead his response focuses on a whole slew of other incidents surrounding the NBN, none of which are relevant to the above facts. With that in mind you can only conclude that he knows those facts are true and that there's no point in fighting them.

It’s completely clear that the Liberal’s have failed on their promises with the NBN and this song and dance they continually engage in to make it seem otherwise simply doesn’t work. The current state of the NBN is entirely on their hands now and any attempt to blame the previous government is a farce, carefully crafted to distract us away from the real issues at hand here. They could have simply replaced the nbn’s board and continued on as normal, claiming all the credit for the project. Instead they tried to make it their own and failed, dooming Australia to be an Internet backwater for decades to come.

Take your licks, Fifield, and admit that your government fucked up. Then I'll grant you some leniency.

Adobe Animate CC: Flash’s Death Knell.

Flash has been dying a long, slow and painful death. Ever since it developed a reputation for being a battery killer, something which is abhorrent in today's mobile-centric world, consumers have begun avoiding it at every opportunity. This has been aided tremendously by the adoption of web standards by numerous large companies, ensuring that no one is beholden to Adobe's proprietary software. However the incredibly large investment in Flash was never going to disappear overnight, especially since Adobe's tooling around it is still one of its biggest money makers. It seems Adobe is ready to start digging Flash's grave, however, with the announcement that Flash Professional will become Adobe Animate CC.

The change honestly shouldn't come as much of a surprise as the writing has been on the wall for some time. Adobe first flirted with the idea of Flash Professional becoming an HTML5 tool way back in 2011 with their Wallaby framework and continued to develop the idea as time went on. Of course it was still primarily a Flash development tool, and the majority of people using it are still developing Flash applications, however it was clear that the market wanted to move away from Flash and onto standards based alternatives. That being said, the rebranding of the product away from being a Flash tool signals that Adobe is ready to let Flash start to fade into the background and let the standards based web take over.

Interestingly the change is likely not revenue driven, as Flash Professional accounts for only around 6% or so of Adobe's total income. Rather it seems to be about positioning their authoring tools as the standard for all rich web content, broadening the potential user base for the Animate CC application. From that perspective there's some potential for the rebranding to work, especially since standards based development is now one of their key marketing plays. Whether that will be enough to pull people away from the alternatives that cropped up in the interim is less clear, but Adobe does have a good reputation when it comes to making creative tools.

Flash will likely still hang around in the background for quite some time though, as much of the infrastructure built up around that ecosystem is slow to change. A good example of this is YouTube, which dumped Flash as the default for Chrome some time ago but still left around 80% of its visitors defaulting to Flash. Similarly other sites still rely on Flash for ads and other rich content, with standards based solutions really only being the norm for newly developed websites and products. How long Flash will hang around is an open-ended question but I don't see it disappearing within the next few years.

We’re rapidly approaching a post-Flash world and we will all be much better because of it. Flash is a relic of a different time on the Internet, one where proprietary standards were the norm and everyone was battling for platform dominance. Adobe is now shifting to the larger market of being the tool of choice for content creators on a standards based web, a battle they’re much more likely to win than fighting to keep Flash alive. I, like many others, won’t be sad to see Flash go as the time has come for it to make way for the new blood of the Internet.

Li-Fi: 100 Times Faster, 100 Times Less Useful.

There are certain fundamental limitations when it comes to current wireless communications. Mostly it comes down to the bandwidth of the frequencies used: as more devices come online, the more congested those frequencies become. Simply changing frequencies isn't enough to solve the problem however, especially when it comes to technology that's as ubiquitous as wifi. This is what has driven many to look for alternative technologies, some looking to make the interference work for us whilst others look at doing away with radio frequencies entirely. Li-Fi is a proposed technology that uses light instead of RF to transmit data and, whilst it promises speeds up to 100 times faster than conventional wifi, I doubt it will ever become the wireless communication technology of choice.

Li-Fi utilizes standard LED light bulbs that are switched on and off in nanoseconds, too fast for the human eye to perceive any change in the light's output. Whilst the lights need to remain in an on state in order to transmit data, they are apparently still able to transmit when the light level is below what the human eye can perceive. A direct line of sight isn't strictly required for the technology to work either, as light reflected off walls was still able to produce a usable, albeit significantly reduced, data signal. The first commercial products were demonstrated sometime last year, so the technology isn't just a nice theory.
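The underlying principle is simply encoding bits as rapid changes in light intensity. Here's a toy on-off keying example to illustrate the idea; actual Li-Fi systems use far more sophisticated modulation schemes and run at rates the eye can't perceive, so this is a sketch of the concept rather than how any real product works:

```python
# Toy illustration of the principle behind Li-Fi: data is encoded as rapid
# changes in light intensity (here, simple on-off keying).
def encode(message: str) -> list[int]:
    """Turn a text message into a stream of light levels (1 = on, 0 = dim)."""
    bits = []
    for byte in message.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode(levels: list[int]) -> str:
    """Recover the message from the received light-level samples."""
    data = bytearray()
    for i in range(0, len(levels), 8):
        byte = 0
        for bit in levels[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode("utf-8")

signal = encode("Li-Fi")
print(signal[:16], "...")   # the LED would flicker through these levels
print(decode(signal))       # -> "Li-Fi"
```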

However the technology is severely limited by numerous factors. The biggest limitation is that it can't work without near or direct line of sight between the sender and receiver, which means a transmitter is required in every discrete room you want to use your receiver in. This also means that whatever is feeding data into those transmitters, like say a cabled connection, also needs to be present. Compared to a wifi endpoint, which usually just needs to be placed in a central location to work, this is a rather heavy requirement to satisfy.

Worse still the technology cannot work outside as sunlight overpowers the signal. This likely also means that any indoor implementation would suffer greatly if sunlight entered the room. Thus the claim that Li-Fi is 100 times faster than conventional wifi is likely based on laboratory numbers and not representative of real world performance.

The primary driver for technologies like these is convenience, something which Li-Fi simply can't provide given its current limitations. Setting up a Li-Fi system won't be as easy as screwing in a few new light bulbs; it will likely require heavy investment in either cabling infrastructure or ethernet-over-power systems to support the transmitters. Compare this to a wifi endpoint, which just needs one data connection to cover a large area and can be set up in minutes, and I'm not sure customers will care how fast Li-Fi can be, especially if they also have to buy a new smartphone to use it.

I’m sure there will be some niche applications of this technology but past that I can’t really see it catching on. Faster speeds are always great but they’re all for naught if the limitations on their use are as severe as they are with Li-Fi. Realistically you can get pretty much the same effect with a wired connection and even then the most limiting factor is likely your Internet connection, not your interconnect. Of course I’m always open to being proved wrong on this but honestly I can’t see it happening.

Amazon Teases Prime Air, Again.

The last time I wrote about Amazon Prime Air was almost 2 years ago to the day and back then it seemed to be little more than a flight of fancy. At the time drones, whilst becoming somewhat commonplace, were still an emerging space, especially when it came to regulations and companies making commercial use of them. Indeed the idea instantly ran afoul of the FAA, something which Amazon was surprisingly blasé about at the time. Still there had been musings of them continuing development of the program and today they've shown off another prototype drone that they might use in the future.

The drone is an interesting beast, capable of both VTOL and regular flight. This was most likely done to increase the effective range of the craft, as traditional flight is a lot less energy intensive than pure VTOL flight. The new prototype has a stated range of 16 miles (about 25 km), which you'd probably have to cut in half to allow for the return trip. Whilst that's likely a good step up from the previous prototype they showcased 2 years ago, it still means that a service based on these drones will either be very limited or Amazon is planning a massive shakeup of its distribution network.
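For those playing along at home, the effective delivery radius from that stated range is easy enough to work out (assuming the drone has to make it back to its depot on the same charge):

```python
# Rough effective delivery radius from the stated 16 mile range, assuming an
# out-and-back flight on a single charge. Illustrative only.
KM_PER_MILE = 1.609344

stated_range_miles = 16
stated_range_km = stated_range_miles * KM_PER_MILE
delivery_radius_km = stated_range_km / 2     # out and back on one charge

print(f"total range: ~{stated_range_km:.1f} km")                  # ~25.7 km
print(f"one-way delivery radius: ~{delivery_radius_km:.1f} km")   # ~12.9 km
```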

Of course the timing of this announcement (and its accompanying video), mere hours before the yearly Cyber Monday sale starts in earnest, is no coincidence. Amazon Prime Air is undeniably a marketing tactic, one that's worked well enough in the past to warrant them trying it again in order to boost sales on the day. On the flip side Amazon does seem pretty committed to the idea, with their various proposals for airspace usage and "dozens of prototypes" in the works, however until they start offering the service to real customers it's going to be easy to remain skeptical.

Last time I wrote about Amazon Prime Air one of my local readers mentioned that a similar service was looking to take off here in Australia. The offering was going to be a joint effort between Flirtey, a delivery drone developer, and Zookal, a local textbook sale and rental service. They were targeting the middle of last year for their first delivery by drone, however that never came to pass. Indeed an article from earlier this year was all I could dredge up on the partnership, and at that point they had still yet to use the service commercially. To their credit Flirtey did make the first drone delivery in the USA in July this year, so the technology is there; it just needs to be put to use.

Whether or not something like this will see widespread adoption, however, is something I'm still not sure about. Right now the centralized distribution models that most companies employ simply don't work with the incredibly limited range that most drones have. Even if the range issue could be solved I'm still not sure it would be economical to use them, unless the delivery fees were substantially higher (and then how many customers would pay for that?). Don't get me wrong, I still think it'd be incredibly cool to get something delivered by drone, but at this point I'm still not 100% sold on the idea that it can be done economically.

Leaked NBN Report Shows HFC Woes.

There’s little doubt now that the Multi-Technology Mix was a viable path forward for the NBN. The tenants of faster, cheaper and sooner have all fallen by the wayside in one way or another. The speed guarantees were dropped very quickly as NBNCo (now known as just nbn™) came face to face with the reality that the copper network simply couldn’t support them. The cost of their solution has come into question numerous times and has shown to be completely incorrect. Worst still the subsequent cost blowouts are almost wholly attributed to the changes made by the MTM switch, not the original FTTP solution. Lastly with the delays that the FTTN trials have experienced along with the disruption to provisioning activities that were already under way there is no chance that we’ll have it sooner. Worse still it appears that the HFC network, the backbone upon which Turnbull built his MTM idea, isn’t up to the task of providing NBN services.

The leaked report shows that, in its current state, the Optus HFC network simply doesn't have the capacity, nor is it up to the standards, required to service NBN customers. Chief among the numerous issues listed in the presentation is the fact that the Optus cable network is heavily oversubscribed and would require additional backhaul and nodes to support new customers. Among the other issues listed are pieces of equipment in need of replacement, ingress noise reducing user speeds and the complexity of the established HFC network's multipathing infrastructure. All told, the cost of remediating this network (or "overbuilding" it, as they put it) ranges from $150 million up to $800 million, on top of the capital already spent to acquire the network.

Some of the options presented to fix this situation are frankly comical, like the idea that nbn should engage Telstra to extend its HFC network to cover the areas currently serviced by Optus. Further options peg FTTP as the most expensive, with FTTdp (fiber to the distribution point) and FTTN coming in as the cheaper alternatives. The last option is some horrendous mix of FTTdp and Telstra HFC which would just lead to confusion for consumers, what with 2 NBN offerings in the same suburb with wildly different services and speeds available on them. Put simply, with Optus' HFC network in the state it's in, there is no good solution other than the one the original NBN plan had in mind.

The ubiquitous fiber approach that the original NBN sought to implement avoided all the issues the MTM solution is now encountering, for the simple reason that it didn't rely on the untrustworthy state of the networks already deployed in Australia. It has been known for a long time that the copper network is aging and in dire need of replacement, unable to reliably provide the speeds that many consumers now demand. The HFC networks have always been riddled with problems, with nearly every metro deployment suffering from major congestion from the day it was implemented. Relying on both of these things to deliver broadband services was doomed to fail, and it's not surprising that that's exactly what we've seen ever since the MTM solution was announced.

Frankly this kind of news no longer surprises me. I had hoped that the Liberals would simply take credit for the original idea that Labor put forward, but instead they went one step further and trashed the whole thing. A full FTTP solution would have catapulted Australia to the forefront of the global digital economy, providing benefits far in excess of its cost. Now however we're likely decades away from achieving that, all thanks to the short-sightedness of a potentially one-term government. There really is little to hope for when it comes to the future of the NBN and there's no question in my mind about who is to blame.

Tim Cook Says Macs, iPads Won’t Converge.

Long time readers will know that I've long held the belief that OSX and iOS are bound to merge at some point in the future. For me the reasons for thinking this are wide and varied, but it is most easily seen in the ever-vanishing delineation between the two hardware lines that support them. The iPad Pro was the latest volley that iOS launched against its OSX brethren and, for me, was concrete proof that Apple was looking to merge the two product lines once and for all. Some recent off-hand remarks from CEO Tim Cook convinced many others of my line of thinking, so much so that Cook has now come out and said that Apple won't be developing a converged Mac/iPad device.

That statement probably shouldn't come as much of a surprise given that Cook called the Surface Book "deluded" just under a week ago. Whilst I can understand that it's every CEO's right to have a dig at the competition, the commentary from Cook does seem a little naive in this regard. The Surface has shown that there's a market for a tablet-first laptop hybrid and there's every reason to expect a laptop-first tablet hybrid will meet similar success. Indeed the initial reactions to the Surface Book are overwhelmingly positive, so Cook might want to reconsider the rhetoric he's using on this, especially if Apple ever starts eyeing off a competing device like they did with the iPad Pro.

The response about non-convergence though is an interesting one. Indeed, as Windows 8 showed, spanning a platform across all types of devices can lead to a whole raft of compromises that leave nobody happy. However Microsoft has shown with Windows 10 that it can be done right, and the Surface Book is their chief demonstrator of how a converged system can work. By committing to the idea that the two platforms will never meet in the middle, apart from the handful of integration services that already work across both, Cook limits the potential synergy that could be gained from such integration.

At the same time I get the feeling that the response might have been born out of the concern he stirred up with his previous comment about not needing a PC any more. He later clarified that he meant not needing a PC that isn't a Mac, since Macs are apparently not "Personal Computers". For fans of the Mac platform this felt like a clear signal that Apple sees PCs as an also-ran, something they keep going to engender brand loyalty more than anything else. When you look at the size of the entire Mac business compared to the rest of Apple it certainly looks that way, with it making up less than 10% of the company's earnings. For those who use OSX as their platform for creation, the prospect of it going away is a real concern.

As you can probably tell I don't entirely believe Tim Cook's comments on this matter. Whilst no company would want to take an axe to a solid revenue stream like the Mac platform, the constant blurring of the lines between the OSX and iOS based product lines makes their eventual convergence seem inevitable. It might not come as a big bang, with the two wed in an unholy codebase marriage, but over time I feel the lines differentiating the two product lines will become so blurred as to be meaningless. Indeed if the success of Microsoft's Surface line is anything to go by, Apple may have their hand forced in this regard, something few would have ever expected to see happen to a market leader like Apple.

Jawbone Up3: Good, But Still Missing Something.

I was always of the opinion that the health trackers on the market were little more than gimmicks. Most of them were glorified pedometers worn by people who wanted to look fitness conscious rather than actually using them to stay fit. The introduction of heart rate tracking however presented functionality that wasn't available before and piqued my interest. Unfortunately the lack of continuous passive heart rate monitoring meant that they weren't particularly useful in that regard, so I held off until that was available. The Jawbone Up3 was the first to offer that functionality and, whilst it's still limited to non-active periods, it was enough for me to purchase my first fitness tracker. After using it for a month or so I thought I'd report my findings on it, as most of the reviews out there focus on it at launch rather than how it is now.

The device itself is small and lightweight, and once you get it on it's relatively easy to forget it's strapped to your wrist. The band adjustment system is a little awkward, requiring you to take it off to adjust it and then put it back on, but once you get it to the right size it's not much of an issue. The charging mechanism could be done better as it requires you to line up all the contacts perfectly or the band simply won't charge. It'd be far better to have an inductive charging system, however given the device's size and weight I'd hazard a guess that that wasn't an option. For the fashion conscious, the Up3 seems to go unnoticed by most, with only a few people noticing it over the time I've had it. Overall as a piece of tech I like it, however looks aren't everything when it comes to fitness trackers.

The spec sheet for the Up3 has a laundry list of sensors in it, however you really only get to see the data collected from two of them: the pedometer and the heart rate monitor. Whilst I understand that having all that data would be confusing for most users, for someone like me it'd definitely be of interest. This means that, whilst the Up3 might be the most feature packed fitness tracker out there, in terms of actual, usable functionality it's quite similar to a lot of bands already on the market. For many that will make the rather high asking price a hard pill to swallow. There have been promises of access to more data through the API for some time now but so far they have gone unfulfilled.

What the Up3 really has going for it though is the app, which is well designed and highly functional. Setting everything up took about 5 minutes and it instantly began tracking everything. The SmartCoach feature is interesting as it skirts around providing direct health advice but tries to encourage certain well established healthy behaviours. All the functions work as expected, with my favourite being the sleep alarm. Whilst it took a little tweaking to get right (initially it seemed to just go off at the time I'd set), once it was dialled in I definitely felt more awake when it buzzed me. It's not a panacea for all your sleep woes, but it did give me insight into what behaviours might have been affecting my sleep patterns and what I could do to fix them.

The heart rate tracking seems relatively accurate from a trend point of view. I could definitely tell when I was exercising, sitting down or in a particularly heated meeting where my heart was racing. It’s definitely not 100% accurate as there were numerous spikes, dips and gaps in the readings which often meant that the daily average was not entirely reliable. Again it was more interesting to see the trending over time and linking deviations to certain behaviours. If accuracy is the name of the game however the Up3 is probably not for you as it simply can’t be used for more than averaging.
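For the curious, this is the kind of simple cleanup you'd want to do before trusting a daily average from readings like these. The sample data and filter window below are made up purely for illustration, not pulled from the Up3's API:

```python
# Sketch of why raw averages from a noisy tracker can mislead, and how a simple
# median filter tames spikes and dips before trending. Data is made up.
from statistics import mean, median

# Resting-ish readings with a couple of spurious spikes and a dropout (None).
raw_bpm = [62, 64, 63, 150, 61, 65, None, 62, 30, 63, 64, 66]

samples = [x for x in raw_bpm if x is not None]          # drop the gaps
print(f"naive average: {mean(samples):.1f} bpm")         # dragged around by the spikes

def median_filter(values, window=3):
    """Replace each sample with the median of its neighbourhood."""
    half = window // 2
    return [median(values[max(0, i - half): i + half + 1])
            for i in range(len(values))]

smoothed = median_filter(samples)
print(f"smoothed average: {mean(smoothed):.1f} bpm")     # much closer to the true trend
```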

What’s really missing from the Up3 and it’s associated app is the integration and distillation of all the data it’s able to capture. Many have looked to heart rate monitoring as a way to get more accurate calorie burn rates but the Up3 only uses the pedometer input to do this. The various other sensor inputs could also prove valuable in determining passive calorie burn rate (I, for instance, tend to run “hotter” than most people, something the skin temperature sensor can pick up on) but again their data is unused. On a pure specification level the Up3 is the most advanced tracker out there but that means nothing if that technology isn’t put to good use.

Would I recommend buying one? Honestly, I'm torn. On the one hand it does the basic functions very well and the app looks a lot better than anything the competition has put out so far. However you're paying a lot for technology that you're simply not going to use, hoping that it will become usable sometime in the future. Unless the optical heart rate tracking of other fitness trackers isn't cutting it for you, it's hard to recommend the Up3 above them, and other, simpler trackers will provide much of the same benefit for a lower price. Overall the Up3 has the potential to be something great, but paying for potential rather than actual functionality is something that only early adopters do. That was an easier sell 6 months ago, but with only one major update since then I don't think many are willing to buy something on spec.

Lytro Immerge: True 3D Video.

You’ve likely seen examples of 360º video on YouTube before, those curious little things that allow you to look around the scene as it plays out. Most of these come courtesy of custom rigs that people have created to capture video from all angles, using software to stitch them all together. Others are simply CGI that’s been rendered in the appropriate way to give you the full 360º view. Whilst these are amazing demonstrations of the technology they all share the same fundamental limitation: you’re rooted to the camera. True 3D video, where you’re able to move freely about the scene, is not yet a reality but it will be soon thanks to Lytro’s new camera, the Immerge.

That odd UFO-looking device is the Immerge, containing hundreds of lightfield sensors (the things that powered the original Lytro and the Illum) within each of its rings. There's no change in the underlying technology, the lightfield sensors have the same intensity-plus-direction sensing capabilities, however these will be the first sensors in Lytro's range to boast video capture. This, combined with the enormous array of sensors, allows the Immerge to capture all the details of a scene, including geometry and lighting. The resulting video, which has to be captured and processed on a specially designed server that accompanies the camera, allows the viewer to move around the scene independently of the camera. Suffice to say that's a big step up from the 360° video we're used to seeing today and, I feel, is what 3D video should be.

The Immerge poses some rather interesting challenges however, both in terms of content production and its consumption. For starters it's wildly different from any kind of professional camera currently available, one that doesn't allow a crew to be anywhere near it whilst it's filming (unless they want to be part of the scene). Lytro understands this and has made it remotely operable, however that doesn't detract from the fact that traditional filming techniques simply won't work with the Immerge. Indeed this kind of camera demands a whole new way of thinking, as you're no longer in charge of where the viewer will be looking, nor where they'll end up in a scene.

Similarly, on the consumer end the Immerge relies on the burgeoning consumer VR industry to provide a platform on which it can really shine. This isn't going to be a cinema style experience any time soon, the technology simply isn't there; instead Immerge videos will likely be viewed by people at home on their Oculus Rifts or similar. There's definitely a growing interest in this space from consumers, as I've detailed in the past, however for a device like the Immerge I'm not sure that's enough. There are other possibilities that I'm not thinking of, like shooting on the Immerge and then editing everything down to a regular movie, which might make it more viable, but I feel like that would be leaving so much of the Immerge's potential at the door.

Despite all that though the Immerge does look like an impressive piece of kit and it will be able to do things that no other device is currently capable of doing. This pivot towards the professional video market could be the play that makes their struggle in the consumer market all worthwhile. We won’t have to wait long to see it either as Lytro has committed to the Immerge being publicly available in Q1 next year. Whether or not it resonates with the professional content creators and their consumers will be an interesting thing to see as the technology really does have a lot of promise.