
Leaked NBN Report Shows HFC Woes.

There’s little doubt now that the Multi-Technology Mix was never a viable path forward for the NBN. Its tenets of faster, cheaper and sooner have all fallen by the wayside in one way or another. The speed guarantees were dropped very quickly as NBNCo (now known as just nbn™) came face to face with the reality that the copper network simply couldn’t support them. The cost of their solution has come into question numerous times and has been shown to be wildly off the mark. Worse still, the subsequent cost blowouts are almost wholly attributable to the changes made in the switch to the MTM, not the original FTTP solution. Lastly, with the delays that the FTTN trials have experienced, along with the disruption to provisioning activities that were already under way, there is no chance that we’ll have it sooner. To top it all off, it now appears that the HFC network, the backbone upon which Turnbull built his MTM idea, isn’t up to the task of providing NBN services.

The leaked report shows that, in its current state, the Optus HFC network simply doesn’t have the capacity, nor is it up to the standard, required to service NBN customers. Chief among the numerous issues listed in the presentation is the fact that the Optus cable network is heavily oversubscribed and would require additional backhaul and nodes to support new customers. The other issues listed include equipment in need of replacement, ingress noise reducing user speeds and the complexity of the established HFC network’s multipathing infrastructure. All told, the cost of remediating this network (or “overbuilding” it, as they are calling it) ranges from $150 million up to $800 million, on top of the capital already spent to acquire the network.

Some of the options presented to address this are frankly comical, like the idea that nbn should engage Telstra to extend their HFC network to cover the areas currently serviced by Optus. Further options peg FTTP as the most expensive, with FTTdp (fiber to the distribution point) and FTTN coming in as the cheaper alternatives. The last one is some horrendous mix of FTTdp and Telstra HFC which would just lead to confusion for consumers, what with two NBN offerings in the same suburb offering wildly different services and speeds. Put simply, with Optus’ HFC network in the state it’s in, there’s no good solution other than the one the original NBN plan had in mind.

The ubiquitous fiber approach that the original NBN sought to implement avoided all the issues the MTM solution is now encountering, for the simple reason that the current state of the existing networks deployed in Australia can’t be trusted. It has been known for a long time that the copper network is aging and in dire need of replacement, unable to reliably provide the speeds that many consumers now demand. The HFC networks have always been riddled with issues, with nearly every metro deployment suffering major congestion from the day it was implemented. Relying on both of these to deliver broadband services was doomed to fail, and it’s not surprising that that’s exactly what we’ve seen ever since the MTM solution was announced.

Frankly this kind of news no longer surprises me. I had hoped that the Liberals would have just taken credit for the original idea that Labor put forward but they went one step further and trashed the whole thing. A full FTTP solution would have catapulted Australia to the forefront of the global digital economy, providing benefits far in excess of its cost. Now, however, we’re likely decades away from achieving that, all thanks to the short-sightedness of a potentially one-term government. There really is little to hope for when it comes to the future of the NBN and there’s no question in my mind of who is to blame.


Tim Cook Says Macs, iPads Won’t Converge.

Long time readers will know I’ve held the belief for some time that OSX and iOS were bound to merge at some point in the future. My reasons for thinking this are wide and varied, but it’s most easily seen in the ever-vanishing delineation between the two hardware lines that support them. The iPad Pro was the latest volley that iOS launched against its OSX brethren and, for me, was concrete proof that Apple was looking to merge the two product lines once and for all. Some recent off-hand remarks from CEO Tim Cook convinced many others of the same thing, enough so that he has since come out and said that Apple won’t be developing a converged Mac/iPad device.

[Image: iPad Pro]

That statement probably shouldn’t come as much of a surprise given that Cook called the Surface Book “deluded” just under a week ago. Whilst I can understand that it’s every CEO’s right to have a dig at the competition, the commentary from Cook does seem a little naive in this regard. The Surface has shown that there’s a market for a tablet-first laptop hybrid and there’s every reason to expect a laptop-first tablet hybrid will meet similar success. Indeed the initial reactions to the Surface Book are overwhelmingly positive, so Cook might want to reconsider the rhetoric he’s using on this, especially if Apple ever starts eyeing off a competing device like they did with the iPad Pro.

The response about non-convergence though is an interesting one. Indeed, as Windows 8 showed, spanning a platform across all types of devices can lead to a whole raft of compromises that leaves nobody happy. However Microsoft has shown that it can be done right with Windows 10, and the Surface Book is their chief demonstrator of how a converged system can work. By committing to the idea that the platforms will never meet in the middle, apart from the handful of integration services that already work across both, Cook limits the potential synergy that could be gained from such integration.

At the same time I get the feeling that the response might have been born out of the concern he stirred up with his previous comment about not needing a PC any more. He later clarified that as not needing a PC that’s not a Mac, since Macs are apparently not Personal Computers. For fans of the Mac platform this felt like a clear signal that Apple sees PCs as an also-ran, something they keep going to endear brand loyalty more than anything else. When you look at the size of the entire Mac business compared to the rest of Apple it certainly looks that way, with it making up less than 10% of the company’s earnings. For those who use OSX as their platform for creation, the prospect of it going away is a real concern.

As you can probably tell I don’t entirely believe Tim Cook’s comments on this matter. Whilst no company would want to take an axe to a solid revenue stream like the Mac platform, the constant blurring of the lines between the OSX and iOS based product lines makes their eventual convergence seem inevitable. It might not come as a big bang with the two wed in an unholy codebase marriage, but over time I feel the lines between what differentiates either product line will be so blurred as to be meaningless. Indeed, if the success of Microsoft’s Surface line is anything to go by, Apple may have their hand forced in this regard, something few would have ever expected to see happen to a market leader like Apple.


Jawbone Up3: Good, But Still Missing Something.

I was always of the opinion that the health trackers on the market were little more than gimmicks. Most of them were glorified pedometers, worn by people who wanted to look fitness conscious rather than actually being used to stay fit. The introduction of heart rate tracking however presented functionality that wasn’t available before and piqued my interest. The lack of continuous passive heart rate monitoring meant the early devices weren’t particularly useful in that regard, so I held off until it was available. The Jawbone Up3 was the first to offer that functionality and, whilst it’s still limited to non-active periods, it was enough for me to purchase my first fitness tracker. After using it for a month or so I thought I’d report my findings, as most of the reviews out there focus on the device at launch rather than how it is now.

[Image: Jawbone Up3]

The device itself is small, lightweight and easy to forget about once it’s strapped to your wrist. The band adjustment system is a little awkward, requiring you to take it off to adjust it and then put it back on, but once you get it to the right size it’s not much of an issue. The charging mechanism could be done better as it requires you to line up all the contacts perfectly or the band simply won’t charge. It’d be far better to have an inductive charging system, however given the device’s size and weight I’d hazard a guess that that wasn’t an option. For the fashion conscious, the Up3 seems to go unnoticed by most, with only a few people noticing it over the time I’ve had it. Overall, as a piece of tech I like it, however looks aren’t everything when it comes to fitness trackers.

The spec sheet for the Up3 has a laundry list of sensors on it, however you really only get to see the data collected from two of them: the pedometer and the heart rate monitor. Whilst I understand that having all that data would be confusing for most users, for someone like me it’d definitely be of interest. This means that, whilst the Up3 might be the most feature packed fitness tracker out there, in terms of actual, usable functionality it’s quite similar to a lot of bands already on the market. For many that will make the rather high asking price a hard pill to swallow. There have been promises of access to more data through the API for some time now but so far they have gone unfulfilled.

[Image: Jawbone Up3 app]

What the Up3 really has going for it is the app, which is well designed and highly functional. Setting everything up took about 5 minutes and it instantly began tracking everything. The SmartCoach feature is interesting as it skirts around providing direct health advice but tries to encourage certain well established healthy behaviours. All the functions work as expected, with my favourite being the smart sleep alarm. Whilst it took a little tweaking to get right (initially it seemed to just go off at the exact time I’d set), once it was dialled in I definitely felt more awake when it buzzed me. It’s not a panacea for all your sleep woes, but it did give me insight into what behaviours might have been affecting my sleep patterns and what I could do to fix them.
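To give a sense of how smart sleep alarms like this generally work, here’s a minimal sketch of the common wake-window approach: buzz the wearer at the first sign of light sleep inside a window before the set alarm time. This is not Jawbone’s actual algorithm, and the function name, per-minute sampling and movement threshold are my own assumptions for illustration.

```python
from datetime import datetime, timedelta

# Illustrative sketch of a "smart alarm" wake window, NOT Jawbone's actual algorithm.
# Assumption: the band records a movement count per minute, and higher counts
# roughly correspond to lighter sleep.
def smart_alarm_fire_time(target, movement_by_minute, window_minutes=30, light_sleep_threshold=20):
    """Return the first minute inside the wake window where movement suggests
    light sleep, falling back to the set alarm time if no such minute appears."""
    t = target - timedelta(minutes=window_minutes)
    while t <= target:
        if movement_by_minute.get(t, 0) >= light_sleep_threshold:
            return t  # wearer looks to be in light sleep, buzz now
        t += timedelta(minutes=1)
    return target     # never caught a light phase, fall back to the set time

# Example: alarm set for 06:30 with a 30 minute window; a burst of movement at
# 06:12 means the band would buzz then rather than at 06:30.
alarm = datetime(2015, 11, 11, 6, 30)
samples = {datetime(2015, 11, 11, 6, 12): 35}
print(smart_alarm_fire_time(alarm, samples))  # 2015-11-11 06:12:00
```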

The heart rate tracking seems relatively accurate from a trend point of view. I could definitely tell when I was exercising, sitting down or in a particularly heated meeting where my heart was racing. It’s definitely not 100% accurate though, as there were numerous spikes, dips and gaps in the readings which often meant the daily average wasn’t entirely reliable. Again it was more interesting to see the trend over time and link deviations to certain behaviours. If accuracy is the name of the game, however, the Up3 is probably not for you as its readings can’t be used for much more than averaging.

What’s really missing from the Up3 and its associated app is the integration and distillation of all the data it’s able to capture. Many have looked to heart rate monitoring as a way to get more accurate calorie burn figures but the Up3 only uses the pedometer input for this. The various other sensor inputs could also prove valuable in determining passive calorie burn (I, for instance, tend to run “hotter” than most people, something the skin temperature sensor can pick up on) but again their data goes unused. On a pure specification level the Up3 is the most advanced tracker out there, but that means nothing if the technology isn’t put to good use.
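As an illustration of the kind of estimate heart rate data makes possible (to be clear, this is not something the Up3 or its app actually does), here’s a small sketch using a commonly cited heart-rate-to-energy regression attributed to Keytel et al. (2005). The coefficients are reproduced from that published fit to the best of my knowledge, and the regression was derived for exercise-intensity heart rates, so treat the numbers as rough.

```python
# Hedged illustration of a heart-rate-based calorie estimate. The regression is
# commonly attributed to Keytel et al. (2005) and was fitted for exercise-range
# heart rates, so resting-state numbers are rough at best.
def kcal_per_minute(heart_rate_bpm, weight_kg, age_years, male=True):
    """Approximate energy expenditure in kcal/min from heart rate, weight and age."""
    if male:
        kj_per_min = -55.0969 + 0.6309 * heart_rate_bpm + 0.1988 * weight_kg + 0.2017 * age_years
    else:
        kj_per_min = -20.4022 + 0.4472 * heart_rate_bpm - 0.1263 * weight_kg + 0.0740 * age_years
    return kj_per_min / 4.184  # convert kJ/min to kcal/min

# Example: an 80 kg, 35 year old male sitting in a heated meeting at 95 bpm.
print(round(kcal_per_minute(95, 80, 35), 1))  # ~6.6 kcal/min by this estimate
```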

Would I recommend buying one? I’m torn, honestly. On the one hand it does do the basic functions very well and the app looks a lot better than anything the competition has put out so far. However you’re paying a lot for technology that you’re simply not going to use, hoping that it will become useful sometime in the future. Unless the optical heart rate tracking of other fitness trackers isn’t cutting it for you, it’s hard to recommend the Up3 over them, and other, simpler trackers will provide much of the same benefit for a lower price. Overall the Up3 has the potential to be something great, but paying for potential rather than actual functionality is something only early adopters do. That was an easier sell 6 months ago, but with only one major update since then I don’t think many are willing to buy something on spec.


Lytro Immerge: True 3D Video.

You’ve likely seen examples of 360º video on YouTube before, those curious little things that allow you to look around the scene as it plays out. Most of these come courtesy of custom rigs that people have created to capture video from all angles, using software to stitch them all together. Others are simply CGI that’s been rendered in the appropriate way to give you the full 360º view. Whilst these are amazing demonstrations of the technology they all share the same fundamental limitation: you’re rooted to the camera. True 3D video, where you’re able to move freely about the scene, is not yet a reality but it will be soon thanks to Lytro’s new camera, the Immerge.

[Image: Lytro Immerge]

That odd UFO looking device is the Immerge, containing hundreds of lightfield sensors (the things that powered the original Lytro and the Illum) within each of its rings. There’s no change in the underlying technology, the lightfield sensors have the same intensity-plus-direction sensing capabilities, however these will be the first sensors in Lytro’s range to support video capture. This, combined with the enormous array of sensors, allows the Immerge to capture all the details of a scene, including geometry and lighting. The resulting video, which has to be captured and processed on a specially designed server that accompanies the camera, allows the viewer to move around the scene independently of the camera. Suffice to say that’s a big step up from the 360º video we’re used to seeing today and, I feel, is what 3D video should be.
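For the curious, a rough way to picture what a lightfield rig captures is the plenoptic function; this is a textbook simplification rather than anything from Lytro’s own material.

```latex
% Simplified plenoptic function: the radiance arriving at every point (x, y, z)
% in the capture volume, from every direction (theta, phi), at every time t.
% A conventional camera samples a single viewpoint; a rig like the Immerge
% samples enough of L that new viewpoints inside the volume can be re-rendered.
L = L(x,\, y,\, z,\, \theta,\, \phi,\, t)
```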

The Immerge poses some rather interesting challenges however, both in terms of content production and its consumption. For starters it’s wildly different from any kind of professional camera currently available, one that doesn’t allow a crew to be anywhere near it whilst it’s filming (unless they want to be part of the scene). Lytro understands this and has made it remotely operable, however that doesn’t detract from the fact that traditional filming techniques simply won’t work with the Immerge. Indeed this kind of camera demands a whole new way of thinking, as you’re no longer in charge of where the viewer will be looking, nor where they’ll end up in a scene.

Similarly on the consumer end the Immerge relies on the burgeoning consumer VR industry to provide an effective platform for it to really shine. This isn’t going to be a cinema style experience any time soon, the technology simply isn’t there; instead Immerge videos will likely be viewed by people at home on their Oculus Rifts or similar. There’s definitely a growing interest in this space from consumers, as I’ve detailed in the past, however for a device like the Immerge I’m not sure that’s enough. There are other possibilities that I’m not thinking of, like shooting on the Immerge and then editing everything down to a regular movie, which might make it more viable, but I feel like that would be leaving so much of the Immerge’s potential at the door.

Despite all that though the Immerge does look like an impressive piece of kit and it will be able to do things that no other device is currently capable of doing. This pivot towards the professional video market could be the play that makes their struggle in the consumer market all worthwhile. We won’t have to wait long to see it either as Lytro has committed to the Immerge being publicly available in Q1 next year. Whether or not it resonates with the professional content creators and their consumers will be an interesting thing to see as the technology really does have a lot of promise.


Windows 7 Ceasing Sales Next Year, Windows 10 Rocketing to Replace it.

The lukewarm reception that Windows 8 and 8.1 received meant that many customers held steadfast to their Windows 7 installations. Whilst it wasn’t a Vista level catastrophe it was still enough to cement the idea that every other version of Windows was worth skipping. At the same time however it also set the stage for making Windows 7 the new XP, opening up the potential for history to repeat itself many years down the line. This is something that Microsoft is keen to avoid, aggressively pursuing users and corporations alike to upgrade to Windows 10. That strategy appears to be working and Microsoft seems confident enough in the numbers to finally cut the cord with Windows 7, stopping sales of the operating system from October next year.

[Image: Windows 10]

It might sound like a minor point, indeed you haven’t been able to buy most retail versions of Windows 7 for about a year now, however it’s telling of how confident Microsoft is feeling about Windows 10. The decision to cut all versions but Windows 7 Pro from OEM offerings was driven by the poor sales of 8/8.1, something which likely wouldn’t have improved with Windows 10 so close to release. The stellar reception that Windows 10 received, passing both of its beleaguered predecessors in under a month, gave Microsoft the confidence it needed to put an end date on Windows 7 sales once and for all.

Of course this doesn’t mean that the current Windows 7 install base is going anywhere; it still has extended support until 2020. This is a little shorter than XP’s lifecycle was, 11 years vs 13 years, and Windows 10’s lifespan (in its current incarnation) is set to be shorter again at 10 years. Thankfully the transition should present fewer challenges to consumers and enterprises alike, given that Windows 7 and Windows 10 share much of the same codebase under the hood. Still, the majority of the growth in Windows 10’s market share has likely come from the consumer space rather than the enterprise.

This is most certainly the case among gamers, with Windows 10 now representing a massive 27.64% of users on the Steam platform. Whilst that might sound unsurprising, PC gamers being the most likely to be on the latest technology, Windows 7 was widely regarded as one of the best platforms for gaming. Windows 8 (and by extension Windows 10, since most of the criticisms apply to both versions) on the other hand was met with some rather harsh criticism about what it could mean for PC gaming. Of course here we are several years later, PC gaming is stronger than ever and gamers are adopting the newer platform in droves.

For Microsoft, who’ve gone on record saying that Windows 10 is slated to be the last version of Windows ever, cutting off the flow of previous versions is critical to ensuring that their current flagship OS reaches critical mass quickly. The early success they’ve seen has given them some momentum, however they’ll need an aggressive push over the holiday season to overcome the current slump in adoption. Windows 10 has proven popular among early adopters; now comes the hard task of convincing everyone else that it’s worth the trouble of upgrading. The next couple of quarters will be telling in that regard and will be key to ensuring Windows 10’s position as the de facto OS for a long time to come.

Magic Leap: Next Level Virtual Reality.

It’s rare that we see a technology come full circle like virtual reality has. Back in the 90s there was a surge of interest in it, with the large, clunky Virtuality machines being found in arcades and pizza joints the world over. Then it fell by the wayside, the expensive machines and the death of the arcades cementing them as a 90s fad. However the last few years have seen a resurgence of interest in VR, with numerous startups and big brands hoping to bring the technology to the consumer. For the most part they’re all basically the same, however there’s one that’s getting some attention and when you see Magic Leap’s demo you’ll see why.

Taken at face value the demo doesn’t really look like anything different from what current VR systems are capable of, however there is one key difference: no reference cards or QR codes anywhere to be seen. Most VR works off some form of visual cue so that it can determine things like distance and position, however Magic Leap’s system appears to have no such limitation. What’s interesting is that they’ve repurposed another technology to gather the required information. In the past I would’ve guessed a scanning IR laser or something similar, but it’s actually a light-field sensor.

Just like the ones that power the Lytro and the Illum.

Light-field sensors differ from traditional camera sensors by being able to capture directional information about the light in addition to its brightness and colour. For the consumer grade cameras we’ve seen based on this technology, it meant that pictures could be refocused after the image was taken and even given a subtle 3D effect. Magic Leap however appears to be using a light-field sensor to map out the environment, providing a 3D picture of what the device is looking at. Then, with that information, they can superimpose a 3D model and have it realistically interact with the world (like the robot disappearing behind the table leg and the solar system reflecting off the table).
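To make the occlusion trick a little more concrete, here’s a minimal sketch of depth-tested compositing, the generic technique that a per-pixel depth map enables. This is not Magic Leap’s actual pipeline and all of the names, shapes and inputs here are my own assumptions.

```python
import numpy as np

# Minimal sketch of depth-tested compositing: draw the rendered virtual object
# only where it sits closer to the viewer than the real scene, so it naturally
# disappears behind real geometry (e.g. a table leg).
def composite(camera_rgb, scene_depth, virtual_rgb, virtual_depth, virtual_mask):
    """camera_rgb:    HxWx3 live camera frame
    scene_depth:   HxW per-pixel depth of the real world (from the depth sensor)
    virtual_rgb:   HxWx3 rendered colour of the virtual object
    virtual_depth: HxW rendered depth of the virtual object
    virtual_mask:  HxW bool, True wherever the virtual object was rendered"""
    visible = virtual_mask & (virtual_depth < scene_depth)  # object in front of the world
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]
    return out
```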

Whilst Magic Leap’s plans might be a little more sky high than an entertainment device (it appears they want to be a successful version of Google Glass) that’s most certainly going to be where their primary market will be. Whilst we’ve welcomed smartphones into almost every aspect of our lives it seems that an always on, wearable device like this is still irksome enough that widespread adoption isn’t likely to happen. Still though even in that “niche” there’s a lot of potential for technology like this and I’m sure Magic Leap will have no trouble finding hordes of willing beta testers.

3D Printing With Rocks and String.

Ever since my own failed attempt to build a 3D printer I’ve been fascinated by the rapid progress that has been made in this field. In under a decade 3D printing has gone from a niche hobby, one that required numerous hours to get working, to a commodity service. The engineering work has since been translated to different fields and to numerous materials beyond simple plastic. However every so often someone manages to do 3D printing in a way that I had honestly never thought of, like this project where a sculpture is 3D printed using nothing more than rocks and string.

Whilst it might not be the most automated or practical way to create sculptures it is by far one of the most novel. Like a traditional selective laser sintering printer, each new layer is formed by piling material on top of the previous one. This is then secured by laying string over it, forming the eventual shape of the sculpture. They call the material reversible concrete, which is partly true: the aggregate they appear to be using looks like the stuff you’d use in concrete, however I doubt the structural properties match those of its more permanent brethren. Still, it’s an interesting idea that could have some wider applications outside the arts space.


Labor’s Return to FTTP Scarred by the NBN’s MTM Past.

The current MTM NBN is by all accounts a total mess. Every single promise that the Liberal party has made with respect to it has been broken. First the speed guarantee for the majority of Australians was scrapped. Then the timeline blew out as the FTTN trials took far longer to complete than stated. Finally the cost of the network, widely described as being a third of the FTTP solution, has since ballooned to well above any cost estimate that preceded it. The slim sliver of hope that all us technologically inclined Australians hang on to is that this current government lasts only a single term and that Labor reintroduces the FTTP NBN in all its glory. Whilst it seems that Labor is committed to their original idea, the future of Australia’s Internet will bear the scars of the Liberals’ term in office.

[Image: NBN]

Jason Clare, who’s picked up the Shadow Communications Minister position in the last Labor cabinet reshuffle before the next election, has stated that Labor would ramp up the number of homes connected to fiber if they were successful at the next election. Whilst there are no solid policy documents available yet to determine exactly what that means, Clare has clearly signalled that FTTN rollouts are on the way out. This is good news, however it does mean that Australia’s Internet infrastructure won’t be the fiber heaven it was once envisioned to be. Instead we will be left with a network that’s mostly fiber, dotted with pockets of Internet backwaters that have little hope of change in the near future.

Essentially it would seem that Labor would keep current contract commitments, which means a handful of FTTN sites would still be deployed and anyone on a HFC network would remain on it for the foreseeable future. Whilst these are currently serviceable, their upgrade paths are far less clear than those of their fully fiber based brethren. This means the money spent on upgrading the HFC networks, as well as any money spent on remediating copper to make FTTN work, is wasted capital that could have been invested in the superior fiber-only solution. Labor isn’t to blame for this, I understand that breaking contractual commitments is something they’d like to avoid, but it shows just how much damage the Liberals’ MTM NBN plan has done to Australia’s technological future.

Unfortunately there’s really no fix for this, especially if you want something politically palatable.

If we’re serious about transitioning Australia away from the resources backed economy that’s powered us over the last decade, investments like the FTTP NBN are what we are going to need. There’s a clear relationship between Internet speeds and economic growth, something which quickly makes the asking price look extremely reasonable. Doing it half-arsed with a cobbled together mix of technologies will only result in a poor experience, dampening any benefits that such a network could provide. The real solution, the one that will last us as long as our current copper network has, is to make it all fiber. Only then will we be able to accelerate our growth at the same rapid pace as the rest of the world, and only then will we see the full benefits of what a FTTP NBN can provide.


The Light-L16 Isn’t “DSLR Quality”.

It’s well known that the camera industry has been struggling for some time and the reason for that is simple: smartphones. There used to be a wide gap in quality between smartphones and dedicated cameras, however that gap has closed significantly over the past couple of years. Now the market segment that used to be dominated by a myriad of pocket cameras has all but evaporated. This has left something of a gap that some smaller companies have tried to fill, like Lytro did with their quirky lightfield cameras. Light is the next company to attempt to revitalize the pocket camera market, albeit in a way (and at a price point) that’s likely to fall as flat as Lytro’s Illum did.

[Image: Light L16]

The Light-L16 is going to be their debut device, a pocket camera that contains no less than 16 independent camera modules scattered about its face. For any one picture up to 10 of these cameras can fire at once and, using the company’s “computational photography” algorithms, the L-16 can produce images of up to 52MP. On the back there’s a large touchscreen powered by a custom version of Android M, allowing you to view and manipulate your photos with the full power of a Snapdragon 820 chip. All of this can be had for $1299 if you preorder soon, or $1699 when it finally goes into full production. It sounds impressive, and indeed some of the sample images look great, however it’s not going to be DSLR quality, no matter how many camera modules they cram into it.

You see, the modules they’re using are pulled from smartphones, which means they share the same limitations. The sensors themselves are tiny, around a tenth the size of those in most DSLR cameras, which are in turn around half the size of full frame sensors. The pixels on these sensors are therefore much smaller, meaning they capture less detail and perform worse in low light than DSLRs do. You can overcome some of these limitations by combining multiple image captures, as the L-16 does, however that’s not going to give you the full 52MP they claim due to computational losses. There are some neat tricks they can pull, like adjusting the focus point (ala Lytro) after the photo is taken, but as we’ve seen that’s not a killer feature for cameras to have.

Those modules are also arranged in a rather peculiar way, and I’m not just talking about how they’re laid out on the device. There are 5 × 35mm, 5 × 70mm and 6 × 150mm modules. This is fine in and of itself, however they can’t claim true optical zoom over that range as there are no gradations between those focal lengths. Sure, you can interpolate using the different lenses, but that’s just a fancy way of saying digital zoom without the negative connotations that come with it. The hard fact of the matter is that you can’t have prime lenses and act like you have zooms at the same time; they’re just not physically the same thing.
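As a back-of-the-envelope illustration of why that interpolation is really just cropping (my own numbers, not Light’s): simulating a focal length that sits between two primes means cropping the wider module, and the pixels retained from that module fall off with the square of the focal length ratio.

```latex
% Cropping a module of focal length f_m to mimic a longer focal length f keeps
% only a fraction of that module's pixels (illustrative, single-module view):
N_{\text{retained}} = N_{\text{module}} \left( \frac{f_m}{f} \right)^{2}
% e.g. mimicking 50mm from a 35mm module keeps (35/50)^2 ~ 49% of its pixels;
% the missing detail has to be synthesised from the other modules.
```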

Worst of all is the price, which is already way above entry level DSLRs, even ones purchased new with a couple of lenses. Sure, I can understand that a DSLR’s bulk is a deal breaker for some, however this camera is over double the thickness of current smartphones. Add to that the fact that it’s a separate device and I don’t think people who are currently satisfied with their smartphones are going to pick one up just because. Just like the Lytro before it, the L-16 is going to struggle to find a market outside of a tiny niche of camera tech enthusiasts, especially at the full retail price.

This may just sound like the rantings of a DSLR purist who likes nothing else, and in part it is, however I’m fine with experimental technology like this as long as it doesn’t make claims that don’t line up with reality. DSLRs are a step above other cameras in numerous regards, mostly for the control they give you over how the image is crafted. Smartphones do what they do well and are by far the best platform for those who use them exclusively. The L-16 is a halfway point between them: it will provide much better pictures than any smartphone but it will fall short of DSLRs. Thinking any differently means ignoring the fundamental differences that separate DSLRs and smartphone cameras, something which I simply can’t do.


Carbon Nanotubes Break Barriers for Moore’s Law.

In the last decade there’s been a move away from raw CPU speed as an indicator of performance. Back when single cores were the norm it was an easy way to judge which CPU would be faster in a general sense, however the switch to multiple cores threw this into question. Partly this comes from architectural decisions and software’s ability to make use of multiple cores, but it also came hand in hand with a stall in CPU clock speeds. This is mostly a limitation of current technology, as faster switching means more heat, something most processors simply can’t handle more of. That could be set to change however, as research out of IBM’s Thomas J. Watson Research Center proposes a new way of constructing transistors that overcomes the limitation.

[Image: Carbon nanotube transistors]

Current day processors, whether they be the monsters powering servers or the small ones ticking away in your smartwatch, are all constructed through a process called photolithography. In this process a silicon wafer is covered in a photosensitive chemical and then exposed to light through a mask. This is what imprints the CPU pattern onto the blank silicon substrate, creating all the circuitry of a CPU. It’s the process that allows us to pack billions upon billions of transistors into a space little bigger than your thumbnail. However it has its limitations, related to things like the wavelength of the light used (shorter wavelengths are needed for smaller features) and the purity of the substrate. IBM’s research takes a very different approach, instead using carbon nanotubes as the transistor material and creating features by aligning and placing them rather than etching them in.
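That wavelength limitation is usually summarised by the Rayleigh criterion for optical lithography, which sets the minimum printable feature size:

```latex
% Minimum printable feature size (critical dimension): lambda is the exposure
% wavelength, NA the numerical aperture of the projection optics and k_1 a
% process-dependent factor with a practical floor of roughly 0.25.
CD = k_1 \, \frac{\lambda}{NA}
```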

Essentially what IBM does is take a heap of carbon nanotubes, which in their native form are a large unordered mess, and align them on top of a silicon wafer. When the nanotubes are placed correctly, as they are in the picture shown above, they form a transistor. Additionally the researchers have devised a method of attaching electrical contacts to these newly formed transistors in such a way that their electrical resistance is independent of the contact’s size. What this means is that the traditional coupling between smaller, faster-switching transistors and ever increasing heat is loosened, allowing the contacts to be greatly reduced in size and potentially allowing for a boost in CPU frequency.
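As a rough, simplified picture of why that matters (my own gloss, not a formula from the research itself): for a conventional contact the resistance climbs roughly in inverse proportion to its size, so shrinking contacts has meant more resistance and therefore more heat, whereas the contacts IBM describes reportedly hold their resistance roughly constant as they shrink.

```latex
% Simplified scaling picture (illustrative only):
R_{\text{conventional}} \;\propto\; \frac{1}{L_{\text{contact}}}
\qquad\text{vs.}\qquad
R_{\text{IBM}} \;\approx\; \text{constant as } L_{\text{contact}} \text{ shrinks}
```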

The main issue such technology faces is that it’s radically different from the way we manufacture CPUs today. There’s a lot of investment in current lithography based fabs and this method likely can’t make use of that investment. So the challenge these researchers face is creating a scalable method for producing chips based on this technology, hopefully in a way that can be adapted for use in current fabs. This is why you’re not likely to see processors based on it for some time, probably not for another 5 years at least according to the researchers.

What it does show though is that there is potential for Moore’s Law to continue for a long time into the future. It seems whenever we brush up against a fundamental limitation, one that has plagued us for decades, new research rears its head to show that it can be tackled. There’s every chance that carbon nanotubes won’t become the new transistor material of choice but insights like these are what will keep Moore’s Law trucking along.