UNSW Qubit in Silicon

Quantum Computing Comes to Silicon.

Traditional computing is bound to binary data, the world of zeroes and ones. This constraint was originally born out of an engineering limitation: two states are easy to represent with differing voltage levels. It hasn’t proved much of a brake on computing’s progress, but there are styles of computing that make use of more than just those zeroes and ones. The most notable is quantum computing, which can represent an exponential number of states depending on the number of qubits (analogous to transistors) the quantum chip has. Whilst there have been some examples of quantum computers hitting the market, even if their quantum-ness is still in question, they are typically based on exotic materials, meaning mass producing them is tricky. That could change with the latest research to come out of the University of New South Wales, as they’ve made an incredible breakthrough.
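As a rough illustration of that “exponential number of states” claim: a classical n-bit register holds exactly one of 2^n values at any moment, whilst describing an n-qubit register’s state takes 2^n complex amplitudes all at once. A toy Python calculation (the helper name is mine, purely illustrative):

```python
# A classical n-bit register holds one of 2**n values at a time;
# an n-qubit state is described by 2**n complex amplitudes at once.
# This just shows how quickly that description grows.

def state_vector_size(n_qubits):
    """Number of complex amplitudes describing an n-qubit state."""
    return 2 ** n_qubits

for n in (1, 2, 10, 50):
    print(f"{n} qubits -> {state_vector_size(n)} amplitudes")
```

At 50 qubits the description already outstrips what any classical machine can hold, which is where the promised speedups come from.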


Back in 2012 the team at UNSW demonstrated that they could build a single qubit in silicon. This by itself was an amazing discovery, as previously created qubits were usually reliant on materials like niobium cooled to superconducting temperatures to achieve their quantum state. However a single qubit isn’t exactly useful on its own, so the researchers were tasked with getting their qubits talking to each other. This is a lot harder than you’d think, as qubits don’t communicate in the same way regular transistors do, so traditional techniques for connecting things in silicon won’t work. After 3 years’ worth of research UNSW’s quantum computing team has finally cracked it, allowing two qubits made in silicon to communicate.

This has allowed them to build a quantum logic gate, the fundamental building block for a larger scale quantum computer. One thing that will be interesting to see is how their system scales out with additional qubits. It’s one thing to get two qubits talking together, indeed there have been several (non-silicon) examples of that in the past, however as you scale up the number of qubits things start to get a lot more difficult. This is because larger numbers of qubits are more prone to quantum decoherence and typically require additional circuitry to overcome it. Whilst they might be able to mass produce a chip with a large number of qubits it might not be of any use if the qubits can’t stay in coherence.

It will be interesting to see what applications their particular kind of quantum chip will have once they build a larger scale version of it. Currently the commercially available quantum computers from D-Wave are limited to a specific problem space called quantum annealing and, as of yet, have failed to conclusively prove that they’re achieving a quantum speedup. The problem is larger than just D-Wave however as there is still some debate about how we classify quantum speedup and how to properly compare it to more traditional methods. Still this is an issue that UNSW’s potential future chip will have to face should it come to market.

We’re still a long way off seeing a generalized quantum computer hit the market, but achievements like those coming out of UNSW are crucial in making them a reality. We have a lot of investment in developing computers on silicon, and if those investments can be directly translated to quantum computing then it’s highly likely we’ll see a lot of success. I’m sure the researchers will have several big chip companies knocking down their doors to license this tech, as it really does have a lot of potential.


Freevolt: Yet Again “Free Energy” Rears Its Ugly Head.

Our world is dominated by devices that need to be plugged in on a regular basis, a necessary tedium for the ever connected lifestyle many of us now lead. Doing away with that is an appealing idea, leaving the cords for things that never move. That idea won’t become reality any time soon however, due to the challenges we face in miniaturizing power generation and storage. That, of course, hasn’t stopped numerous companies from claiming to have done so, with the most recent batch purporting to harvest energy from wireless signals. The latest of these is called Freevolt and unfortunately their PR department has fallen prey to the same superlative claims as many of its predecessors.


Their idea is the same as pretty much all the other free energy ideas that have cropped up over the past couple of years. Essentially their device (which shares the company’s name) has a couple of different antennas on it which harvest electromagnetic waves and transform them into electrical energy. Unlike other devices, which were typically some kind of purpose built thing that just “never needed recharging”, Freevolt wants to be the platform on which developers build devices that use their technology. Their website showcases numerous applications they believe their device will be able to power, including things like wearables and smoke alarms. The only current application of their technology though is the CleanSpace tag which, as of writing, is not available.

Had Freevolt constrained their marketing spiel to ultra low power things like individual sensors I would’ve let it slide; however, they’re claiming much more than that. The fact of the matter is that numerous devices they claim could be powered by this tech simply couldn’t be, especially in their current form factors. Their website clearly shows something like a health tracker, which is far too small to contain the required antennas and electronics, not to mention that its power requirements are far above the 100 microwatts they claim they can generate. Indeed even devices that could integrate the technology, like a smoke alarm, would still have current draws above what this device could provide.
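To put rough numbers on that, here’s a back-of-envelope comparison between the ~100 microwatts Freevolt claims to harvest and some typical device draws. The draw figures below are my own ballpark assumptions for illustration, not vendor specifications:

```python
# Back-of-envelope check: can ~100 microwatts of harvested power run
# these devices? The draw figures are assumed values for illustration,
# not measurements or vendor specs.

HARVEST_W = 100e-6  # Freevolt's claimed output: 100 microwatts

assumed_draws_w = {
    "fitness tracker (assumed ~5 mW average)": 5e-3,
    "smoke alarm standby (assumed ~200 uW)": 200e-6,
    "single low-power sensor (assumed ~50 uW)": 50e-6,
}

for name, draw in assumed_draws_w.items():
    if draw <= HARVEST_W:
        print(f"{name}: feasible")
    else:
        print(f"{name}: short by {draw / HARVEST_W:.0f}x")
```

Even with generous assumptions, only the lone sensor fits inside the claimed power budget; the wearable misses it by more than an order of magnitude.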

To be fair their whitepaper makes far more tempered claims about what their device is capable of, mostly aimed at extending battery life rather than outright replacing the battery. However, whilst such claims might be realistic, they fail to account for the fact that many of the same benefits could likely be achieved by simply adding another battery to the device. I don’t know how much their device will cost but I’d hazard a guess that it’d cost a lot more than an additional battery pack. This is all based on the assumption that the device operates in an environment heavy enough in RF to charge it at its optimal rate, something I don’t think will hold true in enough cases to make it viable.

I seriously don’t understand why companies continue to pursue ideas like this, as they either turn out to be completely farcical, infeasible or simply not economically viable. Sure, there is energy to be harvested from EM waves, but it’s so little that the cost of acquiring it is far beyond any of the alternatives. Freevolt might think they’re onto something but the second they start shipping their dev kit I can guarantee the field results will be nothing like what they’re purporting. Not that that will discourage anyone from trying again, as it seems there’s always another fool willing to be parted from their money.


Microsoft Rumoured to be Looking to Acquire AMD.

The last decade has not been kind to AMD. It used to be a company readily comparable to Intel in almost every way, with much the same infrastructure (including chip fabs) and products that could go toe to toe with Intel’s. Today however they’re really only competitive in the low end space, surviving mostly on revenues from sales to both of the current generation games consoles. Now with their market cap hovering at the $1.5 billion mark rumours are beginning to swirl about a potential takeover bid, something numerous companies could mount at such a cheap price. The latest rumours point towards Microsoft and, in my humble opinion, an acquisition by them would be a mixed bag for both parties involved.


The rumour surfaced from an article on Fudzilla citing “industry sources” on the matter, so there’s every chance this will amount to nothing more than just a rumour. Still, talk of an AMD acquisition by another company has been swirling for some time now, so the idea isn’t exactly new. Indeed AMD’s steadily declining stock price, one that has failed to recover since its peak shortly after the company spun off Global Foundries, has made this a possibility for a while. A buyer hasn’t been forthcoming, but let’s entertain the idea that Microsoft is interested and see where it leads us.

As Microsoft expands further into the devices market there’s some potential in owning the chip design process. They’re already using an AMD chip in the current generation console and, with total control over the chip design process, there’s every chance they’d use one in a future device. There’s similar potential for the Surface, however AMD has never been the greatest player in the low power space, so there’d likely need to be some innovation on their part to make that happen. Additionally there’s no real solid offering from AMD in the mobile space, ruling out their use in the Lumia line of devices. Based on chips alone I don’t think Microsoft would go for it, especially with the x86 licensing deal that the previous article I linked to mentions.

Always of interest to any party though will be AMD’s war chest of patents, some 10,000 of them. Whilst the revenue from said patents isn’t substantial (at least I can’t find any solid figures on it, which suggests it isn’t much) they always have value when the lawsuits start coming down. For a company with billions sitting in reserve those patents might well be worth AMD’s market cap, even with a hefty premium on top. If that’s the only value an acquisition offers, however, I can’t imagine AMD, as a company, sticking around for long afterwards, unfortunately.

Of course neither company has commented on the rumour and, as yet, no other sources have confirmed it. Considering the rather murky value proposition such an acquisition offers both companies I honestly have trouble believing it myself. Still, the idea of AMD being taken over comes up more often than it used to, so I wouldn’t put it past them to court offers from anyone and everyone who will hear them. Suffice to say AMD has been in need of a saviour for some time now; it just might not end up being Microsoft.


iPad Pro: Imitation is the Most Sincere Form of Flattery.

Apple are the kings of taking what appears to be failed product ideas and turning them into gold mines. The iPhone took the smartphone market from a niche market of the geeky and technical elite into a worldwide sensation that continues today. The iPad managed to make tablet computing popular, even after both Apple and Microsoft tried to crack the elusive market. However the last few years haven’t seen a repeat of those moments with the last attempt, the Apple Watch, failing to become the sensation many believed it would be. Indeed their latest attempt, the iPad Pro and its host of attachments, feels like simple mimicry more than anything else.


The iPad Pro is a not-quite 13″ device sporting all the features you’d expect of a device in that class. Apple says the new 64-bit A9X chip powering it is “desktop class”, bringing 1.8X the CPU performance and 2X the graphics performance of the previous iPad Air 2. There’s also the huge display, which allows you to run two iPad applications side by side, apparently with no compromises on experience. Alongside the iPad Pro Apple has released two accessories: the Smart Keyboard, which makes use of the new connector on the side of the iPad, and the Apple Pencil, an active stylus. Whilst all these things might make you think it’s a laptop replacement it’s running iOS, meaning it’s still in the same category as its lower powered brethren.

If this is all sounding strangely familiar to you it’s because they’re basically selling an iOS version of the Surface Pro.

Now there’s nothing wrong with copying competitors; all the big players have been doing it for so long that even the courts struggle to agree on who was there first. However the iPad Pro feels like a desperate attempt to capture the Surface Pro’s market. Many analysts lump the Surface and the iPad into the same category but that’s not really the case: the iPad is a tablet and the Surface is a laptop replacement. Compare the Surface Pro to the Macbook, though, and you can see why Apple created the iPad Pro: Apple’s total Mac sales are on the order of $6 billion spread across no fewer than 7 different hardware lines, whilst Microsoft has made $1 billion in a quarter from the Surface alone, a chunk of sales I doubt Apple has managed with just the Macbook. Thus they bring out a competitor that is almost a blow for blow replica of its main rival.

However the problem with the iPad Pro isn’t the mimicry, it’s the last step they didn’t take to make the copy complete: putting a desktop OS on it. Whilst it’s clear that Apple’s plan is to eventually unify their whole range of products under the iOS banner, not putting the iPad Pro on OSX puts it at a significant disadvantage. Sure, the hardware is slightly better than the Surface’s, but that’s all for naught if you can’t do anything with it. There are a few apps on there, but iOS, and the products it’s based on, have always been focused on consumption rather than production. OSX, on the other hand, is an operating system focused on productivity, something the iPad Pro needs in order to realise its full potential. It’s either that or iOS needs some significant rework to make the iPad Pro the laptop replacement that the Surface Pro is.

It’s clear that Apple needs to do something to re-energize the iPad market, with sales figures down both quarter on quarter and year on year, however I don’t believe the iPad Pro will do it for them. The new ultra slim Macbook has already cannibalized part of the iPad’s market and the iPad Pro is going to end up playing in the same space. For those seeking some form of portable desktop environment in the Apple ecosystem, I fail to see why you’d choose an iPad Pro over the Macbook. Had they gone with OSX the value proposition would’ve been far clearer; as it stands this feels like a token attempt to capture the Surface Pro’s market and I just don’t think it will work out.


Data Sovereignty, Cloud Services and the Folly of the USA’s Borderless Jurisdiction.

If you’ve worked in IT with a government organisation you’ll know the term “data sovereignty”. For those who haven’t had the pleasure the term refers to the laws that apply to data in the location that it’s stored in. When dealing with government entities this means that service providers have to make guarantees that the data won’t leave the Australian shores. Because, if it did, then the data wouldn’t be subject to Australian law any more and whatever government got a hold of it would be outside Australia’s jurisdiction. This has been the major limiting factor in the Australian Government’s adoption of cloud services as, until just recently, the major providers didn’t have an Australian presence. However even that might not suffice soon as the US government is attempting to break the idea of data sovereignty by requiring companies to disclose data that’s not within their jurisdiction.


This issue has arisen out of a long running court case that the US government has had against Microsoft. Essentially authorities in the USA want access to information that is stored on Microsoft servers in Dublin, Ireland. Their argument is that since Microsoft is in control of the servers they’re on the hook to provide the data. Microsoft’s argument has been that the US government should make that request from authorities within that jurisdiction. Indeed senior legal counsel from the Irish Supreme Court has said that such a request could be made under the Mutual Legal Assistance Treaty. This hasn’t satisfied the US authorities who believe that since the company is based in the USA all the data they control should be made available to them under their legal jurisdiction.

Putting aside the privacy concerns for the moment (and believe me there are many), if the US courts compel Microsoft to provide data from outside their jurisdiction then the notion of data sovereignty on any cloud service becomes null and void. No longer will anyone be able to assume that their data is subject to the laws of the country it resides in, which raises a whole host of legal issues. Do companies that make use of locally provided but not locally owned services need to comply with US data retention laws like SOX? Are these requests for data going to be held to the same evidentiary standards that other countries require? What’s stopping the US government from compelling US based companies to hand over other governments’ data on these services? I could go on but it all comes down to the US government completely overstepping its jurisdiction.

For someone like me, who works primarily in the large government IT space, the attack feels even more personal. I’ve been a champion of cloud services for years and it’s only been recently that I’ve been able to make use of the public cloud with my clients. Should the US government continue with (and win) this case the ramifications will be instantaneous: all the government services running on cloud services will be in-housed as soon as possible. That’s not to mention the potential effects it could have on how international companies like mine will interact with government. Suddenly we wouldn’t be able to work with any client related data except when we’re on site, a tremendous blow to the way we do business.

The US government needs to realise just how damaging something like this could be both to their reputation internationally and the business that US based companies do elsewhere. Data sovereignty laws exist for a reason and breaking them just because your law enforcement agency doesn’t want to go through the proper channels isn’t a good enough excuse. If they continue down this path the IT industry will suffer immensely as a result and for nothing more than some saved paperwork and inflated egos.

Grow up, USA. Seriously.

3D Printed Model Jet Engine Demonstrates Reverse Thrust.

Have you ever wondered how planes manage to slow down so fast? It’s not that they have amazing brakes, although they do have some of the most impressive disc brakes you’ll ever see; no, most of the work is done by the very thing that launches them into the sky: the engines. The technique is called thrust reversal and, as the name would imply, it redirects the thrust the engine is generating in the opposite direction, slowing the craft down rather than accelerating it. The ways modern aircraft achieve this are wide and varied but one of the most common is demonstrated perfectly by this amazing 3D printed scale model:

The engine the model is based on is a General Electric GEnx-1B, the variant of the GEnx family developed for Boeing’s new flagship plane, the 787 (a sister variant, the GEnx-2B, powers the revamped Boeing 747-8). Whilst this model lacks the complicated turbofan internals of its bigger brothers (replaced by a much simpler electric motor) the rest of it is to specification, including the noise reducing chevrons at the rear and, most importantly, the thrust reversal mechanism. What’s most impressive to me is that the whole thing was printed on a run of the mill extruder based 3D printer. If you’re interested in more details about the engine itself there’s an incredible amount of detail over in the forum where the creator first posted it.

As you can see from the video, when the nacelle (the jet engine’s cover) slides back a series of fins pops up, blocking the fan’s output from exiting out of the rear of the engine. At the same time a void opens up, allowing the thrust to exit out towards the front of the engine. This essentially changes the engine from pulling the craft through the air to pushing back against it, reducing the aircraft’s speed. Most modern jet aircraft use a mechanism like this to shed speed once they’ve touched down; turboprops achieve the same effect differently, by reversing the pitch of their propeller blades.

Many of us have likely seen jet engines doing exactly that but the view that this model gives us of the engine’s internals is just spectacular. It’s one of those things that you don’t often think about when you’re flying but without systems like these there’s no way we’d be flying craft as big as the ones we have today.


Windows 10: Much The Same, and That’s Just Fine.

New Windows releases bring with them a bevy of new features, use cases and controversy. Indeed I can think back to every new Windows release dating back to Windows 95 and there was always something that set off a furor, whether it was UI changes or compatibility issues. For us technical folk though a new version of Windows brings with it opportunity, to experiment with the latest tech and dream about where we’ll take it. For the last month I’ve been using Windows 10 on my home machines and, honestly, whilst it feels much like its Windows 8.1 predecessor I don’t think that’s entirely a bad thing.


Visually Windows 10 is a big departure from its 8 and 8.1 predecessors as, for any non-tablet device, the full screen metro app tray is gone, replaced with a more familiar start menu. The full screen option is still there however, hiding in the notifications area under the guise of Tablet Mode, and for transformer or tablet style devices this will be the default option. The flat aesthetic has been taken even further again with all the iconography being reworked, ironing out almost any 3D element. You’re also not allowed to change the login screen’s laser window background without the aid of a resource hacker, likely due to the extreme amount of effort that went into creating the image.

For most, especially those who didn’t jump on the Windows 8 bandwagon, navigation of the start menu will be familiar, although I must admit that after the years I’ve spent with its predecessor it’s taken some getting used to. Whilst the charms menu might have disappeared, the essence of it appears throughout Windows 10, mostly in the form of settings panels like Network Settings. For the most part they make routine tasks easier, like selecting a wifi network, however once things get complicated (say you have 2 wireless adapters) you’re going to have to root around a little to find what you’re looking for. It is a slightly better system than what Windows 8 had, however.

To give myself the full Windows 10 experience I installed it on 2 different machines in 2 different ways. The first was a clean install on the laptop you see above (my trusty ASUS Zenbook UX32V) and that went along without a hitch. For those familiar with the Windows 8 style installer there’s not much to write home about here as it’s near identical to the previous installers. The second install was an upgrade on my main machine as, funnily enough, I had it on good word that the upgrade process was actually quite useable. As it turns out it was: pretty much everything came across without issue. The only hiccup came from my audio drivers not working correctly (they seemed to default to digital out and wouldn’t let me change it), however a reinstall of the latest drivers fixed everything.

In terms of features there’s really not much in the way of things I’d consider “must haves”, though that’s likely because I’ve been using many of those features since Windows 8 was first released. There are some interesting little additions, however, like the games features that allow you to stream, record and capture screenshots in all DirectX games (something Windows will remind you about when you start them up). Microsoft Edge is also astonishingly fast and quite useable, however since it’s so new the lack of extensions for it has precluded me from using it extensively. Interestingly Internet Explorer still makes an appearance in Windows 10, obviously for those corporate applications that continue to require it.

Under the hood there’s a bevy of changes (which I won’t bore you with here), however the most interesting thing about them is the way Windows 10 is structured for improvements going forward. You see Windows 10 is currently slated to be the last major release of Windows ever, but this doesn’t mean it will remain stagnant. Instead new features will be released incrementally on a much more frequent basis. Indeed the roadmaps I’ve seen show several major releases planned for the not too distant future and if you want a peek at them all you need to do is sign up for the Windows Insider program. Such a strategy could reap a lot of benefits, especially for organisations seeking to avoid the heartache of Windows version upgrades in the future.

All in all Windows 10 is pretty much what I expected it to be. It takes the best parts of Windows 7 and 8 and mashes them together into a cohesive whole that should appease the majority of Windows users. Sure there are some things that some won’t like, the privacy settings being chief among them, however they’re at least solvable issues rather than showstoppers like Vista’s compatibility or 8’s metro interface. Whether Microsoft’s strategy of no more major versions is tenable is something we’ll have to see over the coming years, but at the very least they’ve got a strong base to build from.


An Artificial Brain in Your Pocket.

Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They’re also behind numerous other innovations like predictive text inputs, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren’t exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that however with their TrueNorth architecture, a truly revolutionary idea in computing.


The chip, 16 of which are shown above welded to a DARPA SyNAPSE board, is most easily thought of as a massively parallel chip comprising some 4096 processing cores. Each of these cores contains 256 programmable synapses, totalling around 1 million per chip. Interestingly, whilst the chip’s transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel’s current offering, it uses a fraction of the power you’d expect it to: a mere 70 milliwatts. That kind of power consumption means that chips like these could make their way into portable devices, something no one would really expect with transistor counts that high.
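Those figures are easy to sanity check. A quick sketch, using only the numbers quoted above:

```python
# Sanity checking the quoted TrueNorth figures, using only the
# article's own numbers.

cores_per_chip = 4096
synapses_per_core = 256
synapses_per_chip = cores_per_chip * synapses_per_core
print(synapses_per_chip)  # 1,048,576: "around 1 million per chip"

transistors = 5.4e9
power_w = 70e-3  # 70 milliwatts
print(power_w / transistors)  # ~1.3e-11 watts per transistor
```

That per-transistor figure is what makes the power claim so striking: billions of transistors sipping power on the order of tens of picowatts each.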

But why, I hear you asking, would you want a computerized brain in your pocket?

IBM’s TrueNorth chip is essentially the second half of the two part system that is a neural network. The first step to creating a functioning neural network is training it on a large dataset; the larger the set, the better the network’s capabilities. This is why large companies like Google and Apple can create useable products out of them: they have huge troves of data with which to train them. Then, once the network is trained, you can set it loose upon new data and have it give you insights and predictions, and that’s where a chip like TrueNorth comes in. Essentially you’d use a big network to form the model and then imprint it on a TrueNorth chip, making it portable.
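That two-phase split can be sketched in a toy pure-Python example: a “data-center” training phase that fits a trivial model, and a separate inference-only phase standing in for the frozen network you’d imprint on a chip. Nothing here uses or implies TrueNorth’s actual programming model; all names and the tiny linear model are illustrative.

```python
# Toy "train big, then ship a frozen forward pass" split.
# Fits y = w*x + b by per-sample gradient descent.

def train(data, epochs=200, lr=0.1):
    """'Data-center' phase: learn w, b from (x, y) pairs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b  # the frozen model you'd 'imprint' on the device

def infer(model, x):
    """'On-device' phase: forward pass only, no training machinery."""
    w, b = model
    return w * x + b

data = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0, 1.5)]
model = train(data)
print(infer(model, 3.0))  # close to 7, the true 2*3 + 1
```

The key point is that `infer` needs none of the training machinery: once the weights are fixed, the device only ever runs the cheap forward pass.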

The implications of this probably wouldn’t be immediately apparent to most, as the services would likely retain the same functionality, but it would eliminate the requirement for an always on Internet connection to support them. This could open up a new class of smart devices with capabilities far surpassing anything we currently have, like a pocket translator that works in real time. The biggest obstacle I see to its adoption, though, is cost: a transistor count that high doesn’t come cheap, as you’re either relying on cutting edge lithography or accepting significantly reduced wafer yields. Both of these lead to high priced chips, likely even pricier than current consumer CPUs.

Like all good technology, however, this one is a little way off from finding its way into our hands: whilst the chip exists, the software stack required to use it is still under active development. That might sound like a small thing, but this chip behaves in a way that’s completely different to anything that’s come before it. Once that’s been settled the floodgates can be opened to the wider world and then, I’m sure, we’ll see a rapid pace of innovation that could spur on some wonderful technological marvels.


Simple Code Change Would Defeat RollJam, The $30 Device That Can Unlock Almost Any Car.

There are many things that we trust implicitly, often by the simple idea that since it’s everywhere or that many people use it then it must be safe. It’s hard not to do this as few of us possess the knowledge and understanding of all the systems we use in order to establish explicit trust. Indeed it’s often the case that these systems are considered safe until a flaw is exposed in them, then leading to a break in trust which then must be reestablished. One such system, the keyless entry fobs many of us have with our cars, has just proven itself to be vulnerable to attack but it all could have been avoided with an incredibly simple change to the underlying code.

Keyless entry on your car relies on a fairly simple system for its operation. What happens when you press the unlock button is that a code is wirelessly transmitted from your fob to your car, unlocking the doors. Back in the early days the code that these fobs sent was unique and fixed which, whilst preventing one person’s fob from opening your car, meant it was incredibly simple to copy the code. This was then changed to the current standard of a “rolling code” which changes every time you press the key. This made straight up duplication impossible, as the same code is never used twice, however it opened it up to another, more subtle, attack.

Whilst the codes changed every time the one thing that the manufacturers of these systems didn’t do was invalidate codes that had already been used. This was primarily due to convenience as there’s every chance your fob got pressed when you weren’t in range of the car, burning a code. However the problem with this system is that should someone capture that code they could then use it to unlock your car at a later date. Indeed there had been many proof of concept systems developed to do this however the latest one, a $30 gadget called RollJam, takes the process to a whole new level.

The device consists of a receiver, transmitter and signal jammer. When the device is activated it will actively jam any wireless key entry signal, stopping it from reaching the car. Then, when a user presses their key fob to unlock their doors, it captures the code that was sent. This stops the doors from unlocking however nearly all users will simply press it again, sending another code. RollJam then transmits the first code to the car, unlocking the doors, whilst capturing the other code. The user can now enter their car and RollJam now has a code stored which it can use to gain access at a later date. The device appears to work on most major brands of vehicles with only a few of the more recent models being immune to the attack.

What amazes me is that such an attack could’ve easily been mitigated by including an incremental counter in the key fob. When transmitting a code the fob also sends the current count, meaning any code carrying a previous number is void. The attack can be further blunted by making codes expire after a short time which, I admit, is a little more difficult to implement, but surely not beyond the capability of companies with billions of dollars in annual revenue. To their credit some companies have made headway in preventing such an attack, however that won’t mean a lot for all the cars currently on the road with susceptible systems.
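
Here’s a sketch of what that kind of fix might look like: the fob sends its press counter and a timestamp in the clear, authenticated alongside the code, so the car can void anything stale. Every name here is hypothetical, a real system would need the fob and car clocks handled far more carefully than this.

```python
import hmac
import hashlib

KEY = b"demo-key"  # illustrative shared secret
MAX_AGE = 30       # seconds before a transmitted code expires


def make_packet(counter: int, now: int):
    """Fob side: authenticate the counter and timestamp together."""
    msg = f"{counter}:{now}".encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()[:8]
    return counter, now, tag


last_counter = 0


def car_accepts(counter: int, ts: int, tag: str, now: int) -> bool:
    """Car side: reject forgeries, old counters and expired codes."""
    global last_counter
    msg = f"{counter}:{ts}".encode()
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()[:8]
    if not hmac.compare_digest(tag, expected):
        return False  # forged or corrupted packet
    if counter <= last_counter:
        return False  # a previous press number: void
    if now - ts > MAX_AGE:
        return False  # captured earlier and replayed too late
    last_counter = counter
    return True


pkt = make_packet(1, now=1000)
print(car_accepts(*pkt, now=1010))  # True: fresh counter, recent timestamp
print(car_accepts(*pkt, now=1012))  # False: that counter has been used
```

The counter check kills straight replays, and the expiry is what shrinks the window for a RollJam-style device that holds a captured code for later use.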

In the end it comes down to a combination of convenience and bottom-dollar programming that led to such a pervasive system being as broken as it is. Unfortunately, unlike IT systems which can be patched against such vulnerabilities, these keyless entry systems will likely remain vulnerable for as long as they’re in use. Hopefully car manufacturers take note of this issue and work to address it in future models as, honestly, it seems like one of the most rookie mistakes ever.


Lexus’ Hoverboard is Deceptive Wankery.

There are some technological ideas that captivate the public consciousness, our want for them to exist outstripping any notion of practicality or usability. Chief among them is the flying car, a seemingly amazing idea which, should it ever become mainstream, would pose far more issues than it could ever solve. Still, numerous companies have worked towards making that idea a reality, with nearly all of them meeting the same fate. A close second (or third, if you’re more of a jetpack fan) is the hoverboard, a device that replicates the functionality of a skateboard without the wheels. Our collective desire for something like that is what results in videos like the following and, honestly, they give me the shits:

Anyone who’s followed technology like this knows that a true hoverboard, one that can glide over any surface, simply isn’t possible with our current understanding of physics and level of technological advancement. However if you grab a couple of powerful magnets and put them over a suitable metallic surface you can make a decent simulacrum of what a hoverboard might be; it just can’t leave that surface. Indeed there have been a few prototypes of this kind in the past and, whilst they’re cool and everything, they’re not much more than a demonstration of what a magnet can do.

This is where Lexus comes in with their utterly deceptive bullshit.

Just over a month ago Lexus put out this site showing a sleek-looking board billowing smoke out its sides, serenely hovering a few inches above the ground. The media went ballistic, seemingly forgetting what would be required to make something of this nature, along with the several implementations that came before it. Worse still, the demonstration videos appeared to show the hoverboard working on regular surfaces, just like the ones in the movies that captured everyone’s imaginations. Like all good publicity stunts, however, the reality is far from what the pictures suggest, and I lay the blame squarely at Lexus’ feet for being coy about the details.

You see, the Lexus hoverboard is no different to the others that came before it: it still uses magnets and requires a special surface in order to work. Lexus built that entire set just to demonstrate the hoverboard and stayed mum about the details because they knew no one would care if they knew the truth. Instead they kept everything secret, making many people believe they had created something new when in reality they hadn’t; all they did was put a larger marketing budget behind it.

Maybe I’ve just become an old cynic who hates fun but, honestly, I really got the shits with Lexus and the wider public’s reaction to this malarkey. Sure it looks cool, what with the slick design and mist cascading over the sides, but that’s about where it ends. Everything past that is Lexus engaging in deceptive marketing tactics to make us think it’s more than it is, rather than being straight up about what they did. Of course they likely don’t care what a ranty blogger in a dark corner of the Internet thinks, especially since he’s mentioned their brand name 10 times in one post, but I felt the need to say my piece, even if it won’t change anything.