Technology


Windows 7 Sales Ceasing Next Year, Windows 10 Rocketing to Replace It.

The lukewarm reception that Windows 8 and 8.1 received meant that many customers held steadfast to their Windows 7 installations. Whilst it wasn’t a Vista-level catastrophe, it was still enough to cement the idea that every other version of Windows was worth skipping. At the same time, however, it also set the stage for Windows 7 to become the new XP, opening up the potential for history to repeat itself many years down the line. This is something that Microsoft is keen to avoid, aggressively pushing users and corporations alike to upgrade to Windows 10. That strategy appears to be working, and Microsoft seems confident enough in the numbers to finally cut the cord with Windows 7, stopping sales of the operating system from October next year.


It might sound like a minor point (indeed, you haven’t been able to buy most retail versions of Windows 7 for about a year now), however it’s telling of how confident Microsoft is feeling about Windows 10. The decision to cut all versions but Windows 7 Pro from OEM offerings was due to the poor sales of 8/8.1, something which likely wouldn’t have been improved with Windows 10 so close to release. The stellar reception that Windows 10 received, passing both of its beleaguered predecessors in under a month, gave Microsoft the confidence it needed to set an end date for Windows 7 sales once and for all.

Of course this doesn’t mean that the current Windows 7 install base is going anywhere; it still has extended support until 2020. This is a little shorter than XP’s lifecycle, 11 years versus 13, and Windows 10’s lifespan (in its current incarnation) is set to be shorter again at 10 years. Thankfully this will present fewer challenges to consumers and enterprises alike, given that Windows 7 and Windows 10 share much of the same codebase under the hood. Still, the majority of the growth in Windows 10’s market share has likely come from the consumer space rather than the enterprise.

This is most certainly the case among gamers, with Windows 10 now representing a massive 27.64% of users on the Steam platform. Whilst that might sound unsurprising (PC gamers are the most likely to be on the latest technology), Windows 7 was widely regarded as being one of the best platforms for gaming. Windows 8 (and by extension Windows 10, since most of the criticisms apply to both versions) on the other hand was met with some rather harsh criticism about what it could mean for PC gaming. Of course, here we are several years later: PC gaming is stronger than ever and gamers are adopting the newer platform in droves.

For Microsoft, who’ve gone on record saying that Windows 10 is slated to be the last version of Windows ever, cutting off the flow of previous versions of Windows is critical to ensuring that their current flagship OS reaches critical mass quickly. The early success they’ve seen has given them some momentum, however they’ll need an aggressive push over the holiday season in order to overcome the current slump they find themselves in. It’s proven to be popular among early adopters, however now comes the hard task of convincing everyone else that it’s worth the trouble of upgrading. The next couple of quarters will be telling in that regard and will be key to ensuring Windows 10’s position as the de facto OS for a long time to come.

Magic Leap: Next Level Virtual Reality.

It’s rare that we see a technology come full circle like virtual reality has. Back in the 90s there was a surge of interest in it, with the large, clunky Virtuality machines being found in arcades and pizza joints the world over. Then it fell by the wayside, the expensive machines and the death of the arcades cementing them as a 90s fad. However the last few years have seen a resurgence in interest in VR, with numerous startups and big brands hoping to bring the technology to the consumer. For the most part they’re all basically the same, however there’s one that’s getting some attention, and when you see the demo below you’ll see why.

Taken at face value the above demo doesn’t really look like anything different from what current VR systems are capable of, however there is one key difference: no reference cards or QR codes anywhere to be seen. Most VR works off some form of visual cue so that it can determine things like distance and position, however Magic Leap’s system appears to have no such limitation. What’s interesting about this is that they’ve repurposed another technology in order to gather the required information. In the past I would’ve guessed a scanning IR laser or something similar, but it’s actually a light-field sensor.

Just like the ones that power the Lytro and the Illum.

Light-field sensors differ from traditional camera sensors by being able to capture directional information about the light in addition to the brightness and colour. For the consumer grade cameras we’ve seen based on this technology it meant that pictures could be refocused after the image was taken and even given a subtle 3D effect. For Magic Leap, however, it appears they’re using a light-field sensor to map out the environment, providing a 3D picture of whatever the device is looking at. Then, with that information, they can superimpose a 3D model and have it realistically interact with the world (like the robot disappearing behind the table leg and the solar system reflecting off the table).
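
To give a feel for how that directional information gets used, here’s a minimal sketch of the classic shift-and-sum refocusing trick, assuming the light field has already been decoded into a grid of sub-aperture images. It’s a textbook technique shown purely for illustration, not Magic Leap’s (unpublished) pipeline.

```python
# A minimal shift-and-sum refocus over a light field's sub-aperture images.
# The per-view parallax that makes this possible is the same information a
# device could use to estimate depth and map its surroundings.
import numpy as np
from scipy.ndimage import shift

def refocus(subapertures, slope):
    """subapertures: dict mapping (u, v) lens offsets to 2D greyscale images.
    slope: pixels of parallax per unit of lens offset; changing it moves the
    synthetic focal plane after the shot has been taken."""
    acc = None
    for (u, v), img in subapertures.items():
        # Shift each view in proportion to its offset from the array centre so
        # that points at the chosen depth line up across every view.
        shifted = shift(img.astype(float), (slope * v, slope * u), order=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subapertures)
```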

Whilst Magic Leap’s plans might be a little more sky high than an entertainment device (it appears they want to be a successful version of Google Glass), entertainment is most certainly going to be where their primary market lies. Whilst we’ve welcomed smartphones into almost every aspect of our lives, it seems that an always-on, wearable device like this is still irksome enough that widespread adoption isn’t likely to happen. Still, even in that “niche” there’s a lot of potential for technology like this and I’m sure Magic Leap will have no trouble finding hordes of willing beta testers.

3D Printing With Rocks and String.

Ever since my own failed attempt to build a 3D printer I’ve been fascinated by the rapid progress that has been made in this field. In under a decade 3D printing has gone from a niche hobby, one that required numerous hours to get working, to a commodity service. The engineering work has then been translated to different fields and numerous materials beyond simple plastic. However every so often someone manages to do 3D printing in a way that I had honestly never thought of, like this project where they 3D print a sculpture using rocks and string:

Whilst it might not be the most automated or practical way to create sculptures, it is by far one of the most novel. Like a traditional selective laser sintering printer, each new layer is formed by piling material over the previous one. This is then secured by placing string on top of it, forming the eventual shape of the sculpture. They call this material reversible concrete, which is partly true: the aggregate they appear to be using looks like the stuff you’d use in concrete, however I doubt the structural properties match those of its more permanent brethren. Still, it’s an interesting idea that could have some wider applications outside the arts space.


Labor’s Return to FTTP Scarred by the NBN’s MTM Past.

The current MTM NBN is by all accounts a total mess. Every single promise that the Liberal party has made with respect to it has been broken. First the guaranteed speed being delivered to the majority of Australians was scrapped. Then the timeline blew out as the FTTN trials took far longer to accomplish than they stated they would. Finally the cost of the network, widely described as being a third of the FTTP solution, has since ballooned to well above any cost estimate that preceded it. The slim sliver of hope that all of us technologically inclined Australians hang on to is that the current government goes single term and that Labor reintroduces the FTTP NBN in all its glory. Whilst it seems that Labor is committed to their original idea, the future of Australia’s Internet will bear the scars of the Liberals’ term in office.


Jason Clare, who’s picked up the Shadow Communications Minister position in the last Labor cabinet reshuffle before the next election, has stated that they’d ramp up the number of homes connected to fiber if they were successful at the next election. Whilst there are no solid policy documents available yet to determine what that means, Clare has clearly signalled that FTTN rollouts are on the way out. This is good news, however it does mean that Australia’s Internet infrastructure won’t be the fiber heaven it was once envisioned to be. Instead we will be left with a network that’s mostly fiber, with pockets of Internet backwaters that have little hope of change in the near future.

Essentially it would seem that Labor would keep current contract commitments, which would mean a handful of FTTN sites would still be deployed and anyone on a HFC network would remain on it for the foreseeable future. Whilst these are currently serviceable, their upgrade paths are far less clear than those of their fully fiber-based brethren. This means that the money spent on upgrading the HFC networks, as well as any money spent on remediating copper to make FTTN work, is wasted capital that could have been invested in the superior fiber-only solution. Labor isn’t to blame for this, as I understand that breaking contractual commitments is something they’d like to avoid, but it shows just how much damage the Liberals’ MTM NBN plan has done to Australia’s technological future.

Unfortunately there’s really no fix for this, especially if you want something politically palatable.

If we’re serious about transitioning Australia away from the resources-backed economy that’s powered us over the last decade, investments like the FTTP NBN are what we are going to need. There are clear relationships between Internet speeds and economic growth, something which would quickly make the asking price look extremely reasonable. Doing it half-arsed with a cobbled-together mix of technologies will only result in a poor experience, dampening any benefits that such a network could provide. The real solution, the one that will last us as long as our current copper network has, is to make it all fiber. Only then will we be able to accelerate our growth at the same rapid pace as the rest of the world, and only then will we see the full benefits of what a FTTP NBN can provide.


The Light L16 Isn’t “DSLR Quality”.

It’s well known that the camera industry has been struggling for some time and the reason for that is simple: smartphones. There used to be a wide gap in quality between smartphones and dedicated cameras, however that gap has closed significantly over the past couple of years. Now the market segment that used to be dominated by a myriad of pocket cameras has all but evaporated. This has left something of a gap that some smaller companies have tried to fill, like Lytro did with their quirky light-field cameras. Light is the next company to attempt to revitalize the pocket camera market, albeit in a way (and at a price point) that’s likely to fall as flat as Lytro’s Illum did.


The Light L16 is going to be their debut device, a pocket camera that contains no fewer than 16 independent camera modules scattered about its face. For any one picture up to 10 of these cameras can fire at once and, using their “computational photography” algorithms, the L16 can produce images of up to 52MP. On the back there’s a large touchscreen powered by a custom version of Android M, allowing you to view and manipulate your photos with the full power of a Snapdragon 820 chip. All of this can be had for $1299 if you preorder soon, or $1699 when it finally goes into full production. It sounds impressive, and indeed some of the images look great, however it’s not going to be DSLR quality, no matter how many camera modules they cram into it.

You see, those modules they’re using are pulled from smartphones, which means they share the same limitations. The sensors themselves are tiny, around a tenth the size of the sensors in most DSLRs and smaller again compared to full frame. The pixels on these sensors are therefore much smaller, meaning they capture less detail and perform worse in low light than DSLRs do. You can overcome some of these limitations through multiple image captures, like the L16 is capable of, however that’s not going to give you the full 52MP they claim due to computational losses. There are some neat tricks they can pull, like adjusting the focus point (ala Lytro) after the photo is taken, but as we’ve seen that’s not a killer feature for cameras to have.
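
For a sense of what multi-capture does (and doesn’t) buy you, here’s a minimal frame-stacking sketch. It shows the generic idea behind merging several small-sensor exposures, not Light’s proprietary merge algorithm, and the noise numbers are purely illustrative.

```python
# Averaging N aligned exposures reduces random sensor noise by roughly sqrt(N),
# but it can't recover the detail a physically larger sensor would capture.
import numpy as np

def stack_frames(frames):
    """frames: list of aligned 2D exposures of the same scene (float arrays)."""
    return np.mean(np.stack(frames, axis=0), axis=0)

# Example: ten noisy captures of a flat grey scene.
rng = np.random.default_rng(0)
truth = np.full((480, 640), 0.5)
frames = [truth + rng.normal(0.0, 0.05, truth.shape) for _ in range(10)]
merged = stack_frames(frames)
print(frames[0].std(), merged.std())  # noise drops by about a factor of sqrt(10)
```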

Those modules are also arranged in a rather peculiar way, and I’m not talking about the way they’re laid out on the device. There are 5 x 35mm, 5 x 70mm and 6 x 150mm modules. This is fine in and of itself, however they can’t claim true optical zoom over that range as there are no gradations between those focal lengths. Sure, you can interpolate using the different lenses, but that’s just a fancy way of saying digital zoom without the negative connotations that come with it. The hard fact of the matter is that you can’t have prime lenses and act like you have zooms at the same time; they’re just physically not the same thing.
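
To illustrate the point, here’s a hypothetical sketch of what zooming between fixed focal lengths has to look like: pick the longest module at or below the requested focal length, then crop (digitally zoom) the rest of the way. The module list comes from the article; the selection logic is my own illustration, not Light’s firmware.

```python
# Zooming between fixed prime modules: anything between two focal lengths is a
# crop of the shorter one, which is digital zoom by another name.
MODULES_MM = [35, 70, 150]

def pick_module(requested_mm):
    """Return (module focal length, digital crop factor) for a requested zoom."""
    usable = [m for m in MODULES_MM if m <= requested_mm]
    base = max(usable) if usable else min(MODULES_MM)
    return base, requested_mm / base

print(pick_module(50))   # (35, ~1.43): a 50mm shot is a 1.43x crop of the 35mm module
print(pick_module(100))  # (70, ~1.43): same story between 70mm and 150mm
```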

Worst of all is the price, which is already way above entry-level DSLRs even if you purchase them new with a couple of lenses. Sure, I can understand that a DSLR’s form factor is a deal breaker for some, however this camera is over double the thickness of current smartphones. Add to that the fact that it’s a separate device and I don’t think people who are currently satisfied with their smartphones are going to pick one up just because. Just like the Lytro before it, the L16 is going to struggle to find a market outside of a tiny niche of camera tech enthusiasts, especially at the full retail price.

This may just sound like the rantings of a DSLR purist who likes nothing else, and in part it is, however I’m fine with experimental technology like this as long as it doesn’t make claims that don’t line up with reality. DSLRs are a step above other cameras in numerous regards, mostly for the control they give you over how the image is crafted. Smartphones do what they do well and are by far the best platform for those who use them exclusively. The L16, however, is a halfway point between them: it will provide much better pictures than any smartphone but it will fall short of DSLRs. Thinking any differently means ignoring the fundamental differences that separate DSLRs and smartphone cameras, something which I simply can’t do.


Carbon Nanotubes Break Barriers for Moore’s Law.

In the last decade there’s been a move away from raw CPU speed as an indicator of performance. Back when single cores were the norm it was an easy way to judge which CPU would be faster in a general sense, however the switch to multiple cores threw this into question. Partly this comes from architecture decisions and software’s ability to make use of multiple cores, but it also came hand in hand with a stalling of CPU clock speeds. This is mostly a limitation of current technology, as faster switching means more heat, something most processors can’t handle much more of. This could be set to change however, as research out of IBM’s Thomas J. Watson Research Center proposes a new way of constructing transistors that overcomes that limitation.
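
As a rough illustration of why clock speeds stalled, the standard first-order model (not something specific to IBM’s work) has a chip’s dynamic power scaling directly with switching frequency:

\[ P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f \]

where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage and \(f\) the clock frequency. Pushing \(f\) higher, along with the voltage needed to sustain it, drives heat output up faster than conventional packaging can remove it.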


Current-day processors, whether they be the monsters powering servers or the small ones ticking away in your smartwatch, are all constructed through a process called photolithography. In this process a silicon wafer is covered in a photosensitive chemical and then exposed to light through a mask. This is what imprints the CPU pattern onto the blank silicon substrate, creating all the circuitry of a CPU. This process is what allows us to pack billions upon billions of transistors into a space little bigger than your thumbnail. However it has its limitations, related to things like the wavelength of light used (shorter wavelengths are needed for smaller features) and the purity of the substrate. IBM’s research takes a very different approach, instead using carbon nanotubes as the transistor material and creating features by aligning and placing them rather than etching them in.
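
That wavelength limitation is usually captured by the Rayleigh resolution criterion; the formula is standard lithography lore, and the worked numbers below are my own illustrative choices rather than anything from IBM’s paper:

\[ CD = k_{1} \frac{\lambda}{NA} \]

Here \(CD\) is the smallest printable feature, \(\lambda\) the exposure wavelength, \(NA\) the numerical aperture of the optics and \(k_{1}\) a process-dependent factor. For a 193 nm ArF immersion scanner with \(NA \approx 1.35\) and \(k_{1} \approx 0.3\) that works out to roughly 43 nm, which is why ever-smaller features demand shorter wavelengths (EUV) or multi-patterning tricks.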

Essentially what IBM does is take a heap of carbon nanotubes, which in their native form are a large unordered mess, and align them on top of a silicon wafer. When the nanotubes are placed correctly, like they are in the picture shown above, they form a transistor. Additionally the researchers have devised a method of attaching electrical contacts to these newly formed transistors in such a way that their electrical resistance is independent of their width. What this means is that the traditional limitation of increasing heat with increased frequency is decoupled, allowing them to greatly reduce the size of the contacts and potentially boost CPU frequency.

The main issue such technology faces is that it is radically different from the way we currently manufacture CPUs today. There’s a lot of investment in current lithography based fabs and this method likely can’t make use of that investment. So the challenge these researchers face is creating a scalable method with which they can produce chips based on this technology, hopefully in a way that can be adapted for use in current fabs. This is why you’re not likely to see processors based on this technology for some time, probably not for another 5 years at least according to the researchers.

What it does show though is that there is potential for Moore’s Law to continue for a long time into the future. It seems whenever we brush up against a fundamental limitation, one that has plagued us for decades, new research rears its head to show that it can be tackled. There’s every chance that carbon nanotubes won’t become the new transistor material of choice but insights like these are what will keep Moore’s Law trucking along.


Quantum Computing Comes to Silicon.

Traditional computing is bound up in binary data, the world of zeroes and ones. This constraint was originally born out of an engineering limitation, designed to ensure that the different states could be easily represented by differing voltage levels. This hasn’t proved to be much of a limiting factor in the progress that computing has made, however there are different styles of computing which make use of more than just those zeroes and ones. The most notable is quantum computing, which is able to represent an exponential number of states depending on the number of qubits (analogous to transistors) that the quantum chip has. Whilst there have been some examples of quantum computers hitting the market, even if their quantum-ness is still in question, they are typically based on exotic materials, meaning mass production of them is tricky. This could change with the latest research to come out of the University of New South Wales, as they’ve made an incredible breakthrough.
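
To make the “exponential number of states” concrete, this is the textbook description of an n-qubit register (nothing here is specific to UNSW’s devices):

\[ |\psi\rangle = \sum_{i=0}^{2^{n}-1} c_{i}\,|i\rangle, \qquad \sum_{i=0}^{2^{n}-1} |c_{i}|^{2} = 1 \]

A register of n qubits is a superposition over all \(2^{n}\) classical bit strings, so fully describing even 50 qubits takes on the order of \(10^{15}\) complex amplitudes, which is where the potential for quantum speedups comes from.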


Back in 2012 the team at UNSW demonstrated that they could build a single qubit in silicon. This by itself was an amazing discovery, as previously created qubits were usually reliant on materials like niobium cooled to superconducting temperatures to achieve their quantum state. However a single qubit isn’t exactly useful on its own, and so the researchers were tasked with getting their qubits talking to each other. This is a lot harder than you’d think, as qubits don’t communicate in the same way that regular transistors do and so traditional techniques for connecting things in silicon won’t work. So, after 3 years’ worth of research, UNSW’s quantum computing team has finally cracked it and allowed two qubits made in silicon to communicate.

This has allowed them to build a quantum logic gate, the fundamental building block for a larger scale quantum computer. One thing that will be interesting to see is how their system scales out with additional qubits. It’s one thing to get two qubits talking together (indeed there have been several non-silicon examples of that in the past), however as you scale up the number of qubits things start to get a lot more difficult. This is because larger numbers of qubits are more prone to quantum decoherence and typically require additional circuitry to overcome it. Whilst they might be able to mass produce a chip with a large number of qubits, it might not be of any use if the qubits can’t stay in coherence.
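
For a feel of what a two-qubit logic gate actually does, here’s a minimal simulation of a CNOT gate acting on a two-qubit state vector. It’s plain linear algebra shown for illustration; UNSW’s silicon implementation is obviously not a numpy script.

```python
# A CNOT flips the second (target) qubit whenever the first (control) qubit is 1.
# Applied to a control qubit in superposition it produces an entangled pair,
# the kind of operation you simply can't build from single qubits alone.
import numpy as np

# Basis order: |00>, |01>, |10>, |11> (first qubit is the control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Control qubit in an equal superposition, target in |0>: (|00> + |10>) / sqrt(2)
state = np.array([1, 0, 1, 0], dtype=complex) / np.sqrt(2)

entangled = CNOT @ state
print(entangled)  # (|00> + |11>) / sqrt(2): the two qubits are now entangled
```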

It will be interesting to see what applications their particular kind of quantum chip will have once they build a larger scale version of it. Currently the commercially available quantum computers from D-Wave are limited to a specific problem space called quantum annealing and, as of yet, have failed to conclusively prove that they’re achieving a quantum speedup. The problem is larger than just D-Wave however as there is still some debate about how we classify quantum speedup and how to properly compare it to more traditional methods. Still this is an issue that UNSW’s potential future chip will have to face should it come to market.

We’re still a long way from seeing a generalized quantum computer hit the market, but achievements like those coming out of UNSW are crucial in making them a reality. We have a lot of investment in developing computers on silicon, and if those investments can be directly translated to quantum computing then it’s highly likely that we’ll see a lot of success. I’m sure the researchers are going to have several big chip companies knocking down their doors to get a license for this tech, as it really does have a lot of potential.


Freevolt: Yet Again “Free Energy” Rears Its Ugly Head.

Our world is dominated by devices that need to be plugged in on a regular basis, a necessary tedium for the ever-connected lifestyle many of us now lead. Doing away with that is an appealing idea, leaving the cords for things that never move. That idea won’t become reality any time soon however, due to the challenges we face in the miniaturization of power generation and storage. That, of course, hasn’t stopped numerous companies from saying that they have done so, with the most recent batch purporting to be able to harvest energy from wireless signals. The latest company to do this is called Freevolt and, unfortunately, their PR department has fallen prey to the same superlative claims as many of its predecessors.


Their idea is the same as pretty much all the other free energy ideas that have cropped up over the past couple of years. Essentially their device (which shares the company’s name) has a couple different antennas on it which can harvest electromagnetic waves and transform them into energy. Unlike other devices, which typically were some kind of purpose built thing that just “never needed recharging”, Freevolt wants to be the platform on which developers build devices that use their technology. Their website showcases numerous applications that they believe their device will be able to power including things like wearables and smoke alarms. The only current application of their technology though is the CleanSpace tag which, as of writing, is not available.

Had Freevolt constrained their marketing spiel to ultra-low-power things like individual sensors I would’ve let it slide, however they’re not just claiming that. The fact of the matter is that numerous devices which they claim could be powered by this tech simply couldn’t be, especially with their current form factors. Their website clearly shows something like a health tracker, which is far too small to contain the required antennas and electronics, not to mention that its power requirements are far above the 100 microwatts they claim they can generate. Indeed even devices that could integrate the technology, like a smoke alarm, would still have current draws above what this device could provide.
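
Some back-of-the-envelope arithmetic shows why. The only figure taken from Freevolt here is the claimed ~100 microwatt harvest rate; the device draws are my own rough assumptions for typical hardware, purely for illustration.

```python
# Compare the claimed harvest rate against rough average device power draws.
HARVEST_W = 100e-6  # ~100 uW, Freevolt's claimed harvest rate

# Assumed average draws for illustration only (real devices vary widely).
assumed_draws_w = {
    "fitness tracker": 10e-3,         # ~10 mW average (assumption)
    "smoke alarm (standby)": 0.5e-3,  # ~0.5 mW average (assumption)
}

for name, draw in assumed_draws_w.items():
    print(f"{name}: draws roughly {draw / HARVEST_W:.0f}x what the harvester supplies")

# Even the friendliest case here needs several times the claimed harvest rate,
# which is why "extend the battery" is a far easier claim than "replace it".
```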

To be fair, their whitepaper makes far more tempered claims about what their device is capable of, mostly aimed at extending battery life rather than outright replacing it. However, whilst such claims might be realistic, they fail to account for the fact that many of the same benefits they’re purporting could likely be achieved by simply adding another battery to the device. I don’t know how much their device will cost but I’d hazard a guess that it’d cost a lot more than adding in an additional battery pack. This is all based on the assumption that the device operates in an environment with enough ambient RF to charge it at its optimal rate, something which I don’t think will hold true in enough cases to make it viable.

I seriously don’t understand why companies continue to pursue ideas like this, as they have all turned out to be either completely farcical, infeasible or simply not economically viable. Sure, there is energy to be harvested from EM waves, but the energy is so low that the cost of acquiring it is far beyond any of the alternatives. Freevolt might think they’re onto something, but the second they start shipping their dev kit I can guarantee the field results will be nothing like what they’re purporting. Not that that will discourage anyone from trying it again, as it seems there’s always another fool willing to be parted with their money.


Microsoft Rumoured to be Looking to Acquire AMD.

The last decade has not been kind to AMD. It used to be a company that was readily comparable to Intel in almost every way, having much the same infrastructure (including chip fabs) whilst producing competitive products. Today however they’re really only competitive in the low-end space, surviving mostly on revenue from the chips powering both of the current generation of games consoles. Now, with their market cap hovering at the $1.5 billion mark, rumours are beginning to swirl about a potential takeover bid, something numerous companies could afford at such a cheap price. The latest rumours point towards Microsoft and, in my humble opinion, an acquisition from them would be a mixed bag for both parties involved.


The rumour surfaced from an article on Fudzilla citing “industry sources” on the matter, so there’s potential that this will amount to nothing more than just a rumour. Still, talk of an AMD acquisition by another company has been swirling for some time now, so the idea isn’t exactly new. Indeed AMD’s steadily declining stock price, one that has failed to recover ever since its peak shortly after it spun off Global Foundries, has made this a possibility for a while. A buyer hasn’t been forthcoming, but let’s entertain the idea that Microsoft is interested and see where it leads us.

As Microsoft begins to expand itself further into the devices market there’s some potential in owning the chip design process. They’re already using an AMD chip for the current generation console and, with total control over that process, there’s every chance that they’d use one for a future device. There’s similar potential for the Surface, however AMD has never been the greatest player in the low power space, so there’d likely need to be some innovation on their part to make that happen. Additionally there’s no real solid offering from AMD in the mobile space, ruling out their use in the Lumia line of devices. Based on chips alone I don’t think Microsoft would go for it, especially with the x86 licensing deal that the previous article I linked to mentions.

Always of interest to any party though will be AMD’s warchest of patents, some 10,000 of them. Whilst the revenue from said patents isn’t substantial (at least I can’t find any solid figures on it, which means it isn’t much), they always have value when the lawsuits start coming down. For a company that has billions sitting in reserve those patents might well be worth AMD’s market cap, even with a hefty premium on top of it. If that’s the only value an acquisition offers, however, I can’t imagine AMD, as a company, sticking around for long afterwards unfortunately.

Of course neither company has commented on the rumour and, as of yet, there aren’t any other sources confirming it. Considering the rather murky value proposition that such an acquisition offers both companies, I honestly have trouble believing it myself. Still, the idea of AMD getting taken over seems to come up more often than it used to, so I wouldn’t put it past them to court offers from anyone and everyone that will hear them. Suffice to say AMD has been in need of a saviour for some time now; it might just not end up being Microsoft at this point.


iPad Pro: Imitation is the Most Sincere Form of Flattery.

Apple are the kings of taking what appear to be failed product ideas and turning them into gold mines. The iPhone took the smartphone from a niche product for the geeky and technical elite to a worldwide sensation that continues today. The iPad managed to make tablet computing popular, even after both Apple and Microsoft had previously tried to crack that elusive market. However the last few years haven’t seen a repeat of those moments, with the last attempt, the Apple Watch, failing to become the sensation many believed it would be. Indeed their latest attempt, the iPad Pro and its host of attachments, feels like simple mimicry more than anything else.


The iPad Pro is a not-quite-13″ device sporting all the features you’d expect of a device in that class. Apple says the new 64-bit A9X chip powering it is “desktop class”, bringing 1.8x the CPU performance and 2x the graphics performance of the previous iPad Air 2. There’s also the huge display, which allows you to run two iPad applications side by side, apparently with no compromises on experience. Alongside the iPad Pro Apple has released two accessories: the Smart Keyboard, which makes use of the new connector on the side of the iPad, and the Apple Pencil, an active stylus. Whilst all these things would make you think it was a laptop replacement, it’s running iOS, meaning it’s still in the same category as its lower-powered brethren.

If this is all sounding strangely familiar to you it’s because they’re basically selling an iOS version of the Surface Pro.

Now there’s nothing wrong with copying competitors (all the big players have been doing it for so long that even the courts struggle to agree on who was there first), however the iPad Pro feels like a desperate attempt to capture the Surface Pro’s market. Many analysts lump the Surface and the iPad into the same category, however that’s not really the case: the iPad is a tablet and the Surface is a laptop replacement. If you compare the Surface Pro to the Macbook though, you can see why Apple created the iPad Pro: their total Mac sales are on the order of $6 billion spread across no less than 7 different hardware lines. Microsoft, on the other hand, has made $1 billion in a quarter from the Surface alone, a significant chunk of sales that I doubt Apple has managed with just the Macbook. Thus they bring out a device that is almost a blow-for-blow replica of its main competitor.

However the problem with the iPad Pro isn’t the mimicry, it’s the last step they didn’t take to make the copy complete: putting a desktop OS on it. Whilst it’s clear that Apple’s plan is to eventually unify their whole range of products under the iOS banner, not putting OSX on the iPad Pro puts it at a significant disadvantage. Sure, the hardware is slightly better than the Surface’s, but that’s all for naught if you can’t do anything with it. There are a few apps on there, but iOS, and the products it’s based on, have always been focused on consumption rather than production. OSX on the other hand is an operating system focused on productivity, something that the iPad Pro needs in order to realise its full potential. It’s either that or iOS needs some significant rework to make the iPad Pro the laptop replacement that the Surface Pro is.

It’s clear that Apple needs to do something to re-energize the iPad market, with sales figures down both in recent quarters and year on year, however I don’t believe that the iPad Pro will do it for them. The new ultra-slim Macbook has already cannibalized part of the iPad’s market and this new iPad Pro is going to end up playing in the same space. For those seeking some form of portable desktop environment in the Apple ecosystem, I fail to see why you’d choose an iPad Pro over the Macbook. Had they gone with OSX the value proposition would’ve been far clearer, however this feels like a token attempt to capture the Surface Pro market and I just don’t think it will work out.