
Samsung’s V-NAND Has Arrived, and It’s Awesome.

When people ask me what one component of their PC they should upgrade, my answer is always the same: get yourself an SSD. It’s not so much the raw performance characteristics that make the upgrade worth it; it’s more that all those things people hate about computers seem to melt away when there’s an SSD behind them. All your applications load near instantly, your operating system feels more responsive and those random long lock-ups, where your hard drive seems to churn over for ages, simply disappear. However the one drawback remains capacity and cost, where SSDs still sit an order of magnitude behind good old spinning rust. Last year Samsung announced their plans to change that with V-NAND and today they deliver on that promise.

[Image: Samsung 850 Pro V-NAND SSD]

The Samsung 850 Pro is the first consumer drive to be released with V-NAND technology and is available in sizes up to 1TB. The initial promise of 128Gbit per chip has unfortunately fallen a little short of the mark, with this current production version delivering only around 86Gbit per chip. This is probably for economic reasons, as the new chips under the hood of this SSD are smaller than the first prototypes, which helps increase the yield per wafer. Interestingly these chips are being produced on an older lithography process, 30nm instead of the 20nm that’s the current standard for most NAND chips. That might sound like a step back, and indeed it would be for most hardware, however the performance of the drive is pretty phenomenal regardless, meaning that V-NAND is going to get even better with time.
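To put that density difference in perspective, here’s some quick napkin maths of my own (assuming the advertised 1TB comes straight from the dice and ignoring overprovisioning, which real drives don’t):

```python
# Napkin maths: how many NAND dice a 1TB drive needs at each chip density.
# Ignores overprovisioning and spare area for simplicity.
TB_IN_GBIT = 1000 * 8  # 1TB (decimal) expressed in gigabits

for density_gbit in (86, 128):
    dice = TB_IN_GBIT / density_gbit
    print(f"{density_gbit}Gbit/die -> ~{dice:.0f} dice for 1TB")
```

Roughly 93 dice at the shipping density versus around 63 had the original 128Gbit promise held, which gives you a feel for why yield per wafer matters so much here.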

Looking at the performance reviews the Samsung 850 Pro seems to be a top contender, if not the best, in pretty much every category. In the world of SSDs consistently high performance across the board like this is very unusual, as a drive manufacturer will typically tune performance to a certain profile: some favour random reads, others sustained write performance. The Samsung 850 Pro seems to do pretty much all of them without breaking a sweat. However what really impressed me about the drive wasn’t so much the raw numbers, it was how the drive performed over time, even without the use of TRIM.

[Figure: HD Tach benchmark of the Samsung 850 Pro 512GB]

SSDs naturally degrade in performance over time, not because the components wear out but because of how flash reads and writes data. NAND can’t simply be overwritten in place: a block has to be erased before it can be written again, and erases are slow. A new drive is all blank space so writes go straight through, but over time blocks end up holding stale data from all the writing and rewriting, forcing the drive to erase (and shuffle valid data around) before it can service new writes. The TRIM command tells the SSD that certain blocks have been freed up by the operating system, allowing the drive to flag them as unused and recover some of that performance. The graph above shows what happens when the new Samsung 850 Pro reaches that degradation point even without the use of TRIM. If you compare that to other SSDs this kind of consistent performance almost looks like witchcraft, but it’s just the V-NAND technology showing one of its many benefits.
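If that explanation feels abstract, here’s a deliberately crude model of the idea (my own sketch; a real flash translation layer and its garbage collection are vastly more involved): writes to dirty blocks pay an erase penalty, and TRIM is what puts blocks back on the cheap path.

```python
# Crude model of why dirty blocks hurt write performance and what TRIM recovers.
ERASE_COST, WRITE_COST = 10, 1   # arbitrary relative costs

blocks = {i: "clean" for i in range(8)}

def write(block_id: int) -> int:
    cost = WRITE_COST
    if blocks[block_id] == "dirty":  # stale data: must erase before programming
        cost += ERASE_COST
    blocks[block_id] = "dirty"
    return cost

def trim(block_id: int) -> None:
    blocks[block_id] = "clean"       # OS tells the drive this block is free again

print(sum(write(i) for i in range(8)))  # fresh drive: total cost 8
print(sum(write(i) for i in range(8)))  # rewrite, no TRIM: total cost 88
for i in range(8):
    trim(i)
print(sum(write(i) for i in range(8)))  # after TRIM: back to 8
```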

Indeed Samsung is so confident in these new drives that it’s giving all of them a 10 year warranty, something you can’t find even on good old spinning rust drives anymore. I’ll be honest: when I first read about V-NAND I had a feeling the first drives would likely be failure-ridden write-offs, like most new technologies are. However this new drive from Samsung appears to be the evolutionary step that all SSDs need to take, as this first iteration device is just walking all over the competition. I was already sold on a Samsung SSD for my next PC build but I think an 850 Pro just made the top of my list.

Now if only those G-SYNC monitors could come out already, then I’d be set to build my next gen gaming PC.


Google’s Cardboard: VR For The Masses.

I can remember my first encounter with virtual reality way back in the 90s. It was a curiosity more than anything else, something that was available at this one arcade/pizza place in the middle of town. You’d go in and there it would be: two giant platforms containing people with their heads strapped into oversized headgear. On the screens behind them you could see what they were seeing, a crude polygonal world inhabited by the other player and a pterodactyl. I didn’t really think much of it at the time, mostly since I couldn’t play it anywhere but there (and that was an hour’s drive away), but as I grew older I always wondered what had become of that technology. Today VR is on the cusp of becoming mainstream and it looks like Google wants to thrust it into the limelight.

[Image: Google Cardboard]

Meet Google Cardboard, the ultra low cost virtual reality headset that Google gave out to every attendee at I/O this year. It’s an incredibly simple idea, using your smartphone’s screen to send a different image to each eye. Indeed if you were so inclined a similar system could be used to turn any screen into a VR headset, although the lenses would need to be crafted for the right dimensions. With that in mind the range of handsets that Google Cardboard supports is a little limited, mostly to Google Nexus handsets and some of their closely related cousins, but I’m sure future incarnations that support a wider range of devices won’t be too far off. Indeed if the idea has piqued your interest you can get an unofficial version for the low cost of $25, a bargain if you’re looking to dabble with VR.
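The rendering trick at the heart of it is almost embarrassingly simple: draw the scene twice, once per eye, into each half of the screen. Here’s a rough sketch of the idea (my own illustration, not Cardboard’s actual SDK, and it ignores the lens distortion correction a real implementation needs):

```python
# Split-screen stereo: render the scene twice, once per eye, with the
# cameras offset by half the interpupillary distance (IPD).
from dataclasses import dataclass

IPD = 0.064  # average human interpupillary distance in metres (assumed)

@dataclass
class Camera:
    x: float
    y: float
    z: float

def eye_cameras(head: Camera) -> tuple[Camera, Camera]:
    left = Camera(head.x - IPD / 2, head.y, head.z)
    right = Camera(head.x + IPD / 2, head.y, head.z)
    return left, right

def viewports(screen_w: int, screen_h: int):
    # Left half of the phone screen for the left eye, right half for the right.
    return (0, 0, screen_w // 2, screen_h), (screen_w // 2, 0, screen_w // 2, screen_h)

left_cam, right_cam = eye_cameras(Camera(0.0, 1.7, 0.0))
print(left_cam, right_cam)
print(viewports(1920, 1080))
```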

Compared to the original OculusVR specs most smartphones are more than capable of driving Google Cardboard with an acceptable level of performance. My current phone, the Sony Xperia Z, has a full 1080p screen and enough grunt to run some pretty decent 3D applications. That, combined with the bevy of sensors in most modern smartphones, makes Google Cardboard a pretty brilliant little platform for testing out what you can do with VR. Of course it also means the experience will vary wildly depending on what handset you have, but for those looking for a cheap platform to validate ideas on it’s hard to argue against it.

Of course this raises the question of what Google’s larger plan is for introducing this concept to the world. Ever since the breakaway success of the OculusVR it’s been obvious that there’s consumer demand for VR, and it only seems to be increasing as time goes on. However most applications are contained solely within the games industry, with only a few interesting experiments (like Living with Lag) breaking outside that mould. There’s a ton of augmented reality applications on Android which could potentially benefit from widespread adoption of something like Cardboard, however beyond that I’m not so sure.

I think it’s probably a gamble on Google’s part, as history has proven that throwing a concept out to the masses is a great way to root out innovative ideas. Google might not have any solid plans for developing VR of this nature themselves, but the community that arises around the idea could prove a fruitful place for applications no one has thought of before. I’d already committed myself to a retail version of the Oculus when it comes out, however, so whilst Cardboard might be a curiosity my heart is unfortunately promised to another.


Facebook is Being Creepy Again, But They Didn’t Have to be.

In the now decade-long history of Facebook we’ve had numerous scandals around privacy and what Facebook should and should not be doing with the data they have on us. For the most part I’ve tended to side with Facebook: whilst I share everyone’s concerns, use of the platform is voluntary in nature and should you highly object to what they’re doing you’re free to not use it. The fact is that any service provided to you free of charge needs to make revenue somewhere, and for Facebook that comes from your data. However this doesn’t seem to stop people from being outraged at something Facebook does with almost clockwork regularity, the most recent instance being tinkering with people’s feeds to see if emotions could spread like the plague.

The results are interesting as they show that emotions can spread through social networks without the need for direct interaction; it can happen just by reading status updates. The experimenters sought to verify this by manipulating the news feeds of some 689,000 Facebook users to skew the emotional content in one direction and then observing how those users’ emotional states fared further down the line. The results confirmed their initial hypothesis, showing that emotions expressed on Facebook can spread to others. Whilst it’s not going to cause a pandemic of ecstasy or a sudden whirlwind of depression cases worldwide, the evidence is there to suggest that your friends’ sentiment on Facebook does influence your own emotional state.

Whilst it’s always nice to get data you can draw causal links from (like with this experiment) I do wonder why they bothered when they could’ve done much more in-depth analysis on a far larger sample of existing data. They could have just as easily taken a much larger data set, classified it in the same way and then done the required analysis. That approach would also sidestep the rather contentious issue of informed consent, as there’s no indication that Facebook approached these individuals before including them in the experiment.
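To illustrate what I mean, an observational version could look something like this toy sketch (the word lists and data are made up for illustration; the real study reportedly used the LIWC word-counting tool for classification):

```python
# Toy observational analysis: classify status updates by emotional tone,
# then compare the tone a user saw with the tone they later posted.
POSITIVE = {"happy", "great", "love", "awesome"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def tone(status: str) -> int:
    words = set(status.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

feed_seen = ["I love this awesome weather", "such a great day"]
later_posts = ["feeling happy today"]

exposure = sum(tone(s) for s in feed_seen)
response = sum(tone(s) for s in later_posts)
print(f"exposure tone: {exposure:+d}, later post tone: {response:+d}")
# At scale you'd correlate response against exposure across millions of
# users' existing feeds, with no manipulation required.
```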

Indeed that’s probably the only issue I have with Facebook doing this: whilst the data they have is theirs to do with as they see fit (within the guidelines of privacy regulations), attempting to alter people’s emotional state is a step too far. The people behind the study have come out and said that the real impact wasn’t that great and that it was all done in aid of making their product better, something which I’m sure is of little comfort to those who object to the experiment in the first place. Whilst the argument can be made that Facebook already manipulates users’ feeds (since you don’t see everything your friends post anymore), doing so for site usability and user engagement is one thing; performing experiments on people without consent is another.

If Facebook wants to continue these kinds of experiments then they should really start taking steps to make sure their user base is aware of what might be happening to them. Whilst I’m sure people would still take issue with Facebook doing widespread analysis of users’ emotional states, it would be a far cry from what they did with this experiment and would likely not run afoul of established experimental standards. The researchers have said they’ll take the reaction to these results under advisement, which hopefully means they’ll be more respectful of their users’ data in the future. However, since we’re going on 10 years of Facebook doing things like this, I wouldn’t hold my breath for immediate change.

 


Recycling Electromagnetic Energy? iFind, Surely You Jest.

If you’re reading this article, which is only available through the Internet, then you’re basking in a tsunami of electromagnetic radiation. Don’t worry though, the vast majority of these waves are so low power that they don’t make it through the first layer of your skin before dissipating harmlessly. Still, they do carry power, enough that this article can worm its way from the server all the way to the device you’re reading it on. Considering just how pervasive wireless signals are in our modern lives it then follows that there’s a potential source of energy there, one that’s essentially free and nigh on omnipresent. Whilst this is true, to some extent, actually harvesting a useful amount of it is at best impractical, but that hasn’t stopped people from trying.

[Figure: Wi-Fi signal strength readings from my phone]

If you’re a longtime fan of Mythbusters like myself you’ll likely remember the episode they did on Free Energy back in 2004. In that episode they tested a myriad of devices for generating electricity, one of them being a radio wave extractor that managed to power half of a wristwatch. In an unaired segment they even rigged up a large coil of wire, placed it next to a high voltage power line and were able to generate a whopping 8mV. The result of all this testing was to show that, whilst there is some power available for harvesting, it’s not a usable quantity by any stretch of the imagination.

So you can imagine my surprise when a product like iFind makes claims like “battery free” and “never needs recharging” based around the concept of harvesting energy from the air.

The fundamental functionality of the iFind isn’t anything new; it’s just yet another Bluetooth tag system so you don’t lose whatever you attach the tag to. Its claim to fame, and one that’s earned it a rather ridiculous half a million dollars, is that it doesn’t have a battery (which it does, unless you want to get into a semantic argument about what “battery” actually means) and that it charges off the electromagnetic waves around you. They’ve even gone as far as to provide some technical documentation showing the power generated from various signals. Suffice to say I think their idea is unworkable at best and, at worst, outright fraud.

The graphs they show in this comment would seem to indicate that it’s capable of charging even under very weak signal conditions, all the way down to -6dBm. That sounds great in principle until you take into account what a typical charging scenario for a device like this would look like, such as the “ideal” one they talk about in some of their literature: a strong wifi signal. The graph above shows the signal strength of my home wifi connection (an ASUS RT-N66U for reference), with the peak readings being from when I had my phone right next to the antennas. That gives a peak received power of some -22dBm, which sounds fine, right? Well, since those power ratings are logarithmic that works out to more than 150 times weaker than a single milliwatt (0dBm), which puts the actual charge time at about 1000 days. If you had a focused RF source you could probably provide it with enough power to charge quickly, but I doubt anyone has one of those in their house.
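For those wanting to check my maths, the dBm conversion is straightforward (the figures here are mine, taken from the screenshot above, not iFind’s):

```python
# Back-of-envelope dBm arithmetic. dBm is a logarithmic scale referenced
# to 1 milliwatt, so power_mW = 10 ** (dBm / 10).

def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

peak = dbm_to_mw(-22)  # my router's signal right at the antenna
ref = dbm_to_mw(0)     # the 1 mW reference point

print(f"-22 dBm = {peak * 1000:.1f} microwatts")
print(f"that's ~{ref / peak:.0f}x weaker than a milliwatt")  # ~158x
```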

There’s also the issue of what kind of power source they have, as the size precludes anything hefty and they only refer to it as a “power bank”. Non-rechargeable batteries that fit within that form factor are usually on the order of a couple of hundred milliamp-hours, with rechargeable variants having a much smaller capacity. Similar devices like Tile, which includes a non-rechargeable, non-replaceable battery, last about a year before dying, which suggests a drain of somewhere around a milliamp-hour per day. Considering the iFind is smaller and rechargeable I wouldn’t expect it to last more than a few weeks before giving up. Of course, since there are no specifications for either of them it’s hard to judge, but the laws of physics don’t differ between products.
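Since neither product publishes specifications, here’s the rough arithmetic with my own guessed capacities, just to show the order of magnitude involved:

```python
# Rough runtime arithmetic under stated assumptions: no specs exist for
# either product, so both capacities below are guesses for illustration.
tile_capacity_mah = 250  # assumed capacity of Tile's non-replaceable cell
tile_life_days = 365     # Tile lasts about a year
drain_mah_per_day = tile_capacity_mah / tile_life_days  # ~0.7 mAh/day

ifind_capacity_mah = 20  # assumed: a tiny rechargeable "power bank"
runtime_days = ifind_capacity_mah / drain_mah_per_day

print(f"estimated drain: {drain_mah_per_day:.2f} mAh/day")
print(f"iFind standby: ~{runtime_days:.0f} days per full charge")
```

Whatever the exact numbers, the point stands: days to weeks of standby, not “never needs recharging”, is what the physics suggests.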

However I will stop short of calling iFind a scam; rather I think it’s a completely misguided exercise that will never deliver on its promises. They’ve probably designed something that works under lab conditions, but the performance just won’t hold up in the real world. There are a lot of still-unanswered questions that have been asked of them, answers to which would go a long way towards assuring people that what they’re making isn’t vaporware. Until they’re forthcoming with more information I’d steer clear of giving them your money, as it’s highly unlikely that the final product will perform as advertised.

What it Takes to Make Wheels that can Travel at 1600km/h.

One of my favourite shows that I found out about far too late into my adult life was How It’s Made. The premise of the show is simple: they take you into the manufacturing process behind many common products, showing you how they go from their raw materials into the products we all know. Whilst I’d probably recommend skipping the episodes which show you how some of your favourite food is made (I think that’s called the Sausage Principle) the insight into how some things are made can be incredibly fascinating. However whilst everyday products can be interesting they pale in comparison to something like the following video which shows how solid aluminium wheels are created for an upcoming jet car:

[Video: forging the solid aluminium wheels for the Bloodhound SSC]

I think what gets me most about this video is the amazing level of precision they’re able to achieve with massive tools, two things which don’t usually go together. The press seems able to move in very small increments and can do so at speeds that seem out of this world. The gripper also has a pretty high level of fidelity about it, able to pick up an extremely malleable piece of heated aluminium without structurally deforming it. That’s only half the equation though, as the operators of these machines are obviously highly skilled, guiding them with incredible accuracy.

In fact the whole YouTube channel dedicated to the Bloodhound SSC is filled with engineering marvels like this, from the construction of the monocoque and its attached components all the way to the interior and the software they’ll be using to run it. If the above video had you tingling with excitement (I certainly was, but I’m strange) then I highly recommend checking it out.

[Image: Watch_Dogs before and after comparison screenshot]

Looks Like Ubisoft Owes Us Some Answers.

In my recent review of Ubisoft Montreal’s latest game, Watch_Dogs, I gave the developers the benefit of the doubt when it came to the graphics issues that many people had raised. Demos are often scripted and sculpted to show a game in the best light possible, so the delivered product often doesn’t line up with people’s expectations. Since Watch_Dogs wasn’t an unplayable monstrosity I chalked it up to the hype leading us all astray and Ubisoft pulling the typical demo shenanigans. As it turns out though there’s a way to make Watch_Dogs look as good as it did in the demos, and all that’s required is adding 2 files to a directory.

This mod came to everyone’s attention yesterday with dozens of screenshots plastering all the major games news outlets. A modder called TheWorse on Guru3D became obsessed with diving into the Watch_Dogs code and eventually managed to unpack many of the game’s core files. After that he managed to enable many of the effects that had been present in the original E3 demo, along with tweaking a number of other settings to great effect. The result speaks for itself (as my before and after screenshots above can attest) with the game looking quite a lot better than it did on my first play through. The thing is that, unlike other graphical enhancements such as ENB (which gives us all those pretty Skyrim screenshots), this mod isn’t adding anything to the rendering pipeline; it’s just enabling functionality that’s already there. Indeed this is most strongly indicated by the mod’s size: a paltry 45KB.

So first things first: I was wrong. Whilst the E3 demo was likely running on a machine far better than most PC gamers have access to, this mod shows that Watch_Dogs is capable of looking a lot better than it currently does. My current PC is approaching 3 years old now, almost ancient in gaming PC years, and it was able to run the mod on ultra graphics settings, something I wasn’t able to do previously. It could probably use a little tweaking to get the framerate a bit higher, but honestly that’s just my preference for high frame rates more than anything. With this in mind the question then turns to why Watch_Dogs shipped on PC in the state it did, and who was ultimately responsible for removing the features that had so many in love with the E3 demo.

The conspiracy theorist in me wants to join the chorus of people saying that Watch_Dogs was intentionally crippled on PC in order to make it look more comparable to its console brethren. Whilst I can’t deny it’s a possibility, I simply have no evidence apart from the features being present in the game files themselves. This is where Ubisoft’s response to the controversy would shed some light on the issue: whilst they’re not likely to say “Yep, we did it because Watch_Dogs looks horrendous on consoles when compared to PC”, they might at least give us some insight into why these particular features were disabled. Unfortunately they’re still keeping their lips sealed on this one, so all we have to go on now is rampant speculation, something I’m not entirely comfortable engaging in.

Regardless of the reasons it does feel a bit disingenuous to be shown one product and then be sold another. Most of the traditional reasons for disabling features, like performance or stability issues, just don’t seem to be present with this mod, which lends credence to the idea that they were disabled on purpose after being fully developed. Until Ubisoft starts talking about this we don’t have much more to go on, and since the features can be re-enabled so easily I don’t think many gamers are going to care too much what they have to say anyway. Still, I’d very much like to know the story behind it, as it looks a lot more like a political or financial decision than a purely technical one.


HP’s “The Machine”: You’d Better Deliver on This, HP.

Whilst computing has evolved exponentially in terms of capabilities and raw performance, the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that’s been lithographed to have millions (and sometimes billions) of transistors etched into it. This is all connected up to the other components and storage through various bus definitions, most of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model, it’s served us well and has kept within the bounds of Moore’s Law for quite some time, however there’s always the nagging question of whether there’s another way to do things, perhaps one that’s much better than anything we’ve done before.

According to HP, their new concept, The Machine, is the answer to that question.

[Figure: HP’s The Machine, high level architecture]

 

For those who haven’t read about it yet (or watched the introductory video on the technology) HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are 3 main components that make it up, 2 of which are based on technologies that have yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, technology which currently powers their Moonshot range of servers. The second is optical interconnects, something which has long been discussed as the next stage in computing but which has yet to make inroads at the level HP is talking about. Finally there’s “universal memory”, essentially the memristor storage that HP Labs has been teasing for some time without bringing any product to light.

As an idea The Machine is pretty incredible, taking best of breed technology for every subsystem of the traditional computer and putting it all together in one place. HP is taking the right approach with it too: whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86), current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. Thankfully they’ll be open sourcing Machine OS, which means it shouldn’t be long before other vendors are able to support it. It would be all too easy for them to create another HP-UX, a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. That being said, the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.

You see, whilst all of the technologies that make up The Machine might be real in one sense or another, 2 of them have yet to see a commercial release. The memristor based storage was “a couple of years away” after the original announcement by HP, yet here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky, and the roadmap shown in the concept video has the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects: whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common) they have yet to be created for the low level interconnects that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something they’ll need to get right if they don’t want the whole concept to fall apart at the seams.

Honestly my scepticism comes from a history of being disappointed by concepts like this with many things promising the world in terms of computing and almost always failing to deliver on them. Even some of the technology contained within The Machine has already managed to disappoint me with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times that I’d love to be proven wrong though as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP however I’ll remain sceptical, lest I get my feelings hurt once again.


The Turing Test is Dead! Long Live the Turing Test!

Back in the days when ICQ was the default messaging platform for us teenagers I can remember becoming rather familiar with all manner of chatbots that’d grace my presence. Most of the time they were programmed to get you to go to a website, sometimes legitimate although almost always some kind of scam, but every so often you’d get one that just seemed to be an experiment to see how real they could make it. It wouldn’t take long to figure out whether there was a real person on the other end though, as their variety of responses was limited and they would often answer questions with more questions, a telltale sign of an expert system. Since those heydays my contact with chatbots has been limited mostly to the examples that have done well in Turing Test competitions around the world. Even those have proved to be less than stellar, showing that this field still has a long way to go.

[Image: Eugene Goostman Turing Test chatbot]

However news has been making the rounds that a plucky little chatbot named Eugene Goostman has passed the Turing Test for the first time. Now the definition of the Turing Test itself is somewhat nebulous, being only that a human judge isn’t able to tell computer generated responses from those of a human, and if you take that literally it’s already been passed several times over. Many, including myself, take it to mean a little more: a chatbot would have to be able to fool the majority of people into thinking it was human before it could be accepted as having passed the test. In that regard Eugene here hasn’t really passed the test at all, although I do admit its creator’s strategy was a good attempt at poking holes in the test’s vague definition.

You see Eugene isn’t your typical generic chatbot; instead he’s been programmed to present as a 13 year old Ukrainian boy for whom English is a second language. It’s clever because it significantly limits the problem space of what he can be expected to answer: you wouldn’t expect a 13 year old to know a lot of things, and when you’re questioning a non-native speaker the verbal tools available to you are again limited. At the same time, however, this is simply an artificial way of making the chatbot seem more human than it actually is. Indeed this is probably the biggest criticism that has been levelled at Eugene since his rise to fame, as he appeared to dodge more questions than he could answer, a telltale sign that you’re speaking to an AI.
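To make the trick concrete, here’s a toy sketch of my own (nothing to do with Eugene’s actual code) showing how a constrained persona shrinks the problem space: anything off-script gets deflected with a question, which is exactly the tell I mentioned earlier.

```python
import random

# Toy persona-constrained chatbot: a handful of canned topics, everything
# else deflected back as a question. This pattern is what makes a limited
# bot feel plausible, and it's also what gives it away.
KNOWN_TOPICS = {
    "home": "I live in Odessa. Have you ever been to Ukraine?",
    "age": "I am 13 years old. How old are you?",
    "pet": "I have a guinea pig. Do you have any animals?",
}

DEFLECTIONS = [
    "I do not understand, my English is not so good. Can you say it simpler?",
    "Why do you ask me this?",
    "That is a boring question. What do you like to do?",
]

def reply(message: str) -> str:
    text = message.lower()
    for topic, answer in KNOWN_TOPICS.items():
        if topic in text:
            return answer
    return random.choice(DEFLECTIONS)  # dodge anything off-script

print(reply("Where is your home?"))
print(reply("Explain how a hash table works."))  # gets a dodge
```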

So, as you can probably tell by the tone of my writing, I don’t think Eugene qualifies as having passed the Turing Test: the criterion used (33% of the judges were fooled) isn’t sufficient, otherwise several other bots would have claimed that title previously. Indeed a judge guessing randomly between human and machine would be “fooled” 50% of the time, so 33% is below even chance. I wholly admit this is due in part to the nebulous way Turing first posited the test, whereby the interpretation of “passed” varies wildly between individuals, but my sentiment does seem to echo through the wider AI community. I think the ideas behind the Eugene chatbot are interesting, as they show how the problem space can be narrowed down, but if the chance of it fooling someone is less than random then, in my mind, that does not qualify as a pass.

I don’t expect the Turing Test to be passed to the majority of the AI community’s satisfaction for some time to come, as it requires duplicating so many human functions that we just haven’t been able to translate into code yet. For me the easiest way to tell a bot from a human is to teach it something esoteric and have it repeat its own interpretation back to me, something no chatbot has been able to do to date. Indeed just that simple example, teaching it something and having it interpret that based on its own knowledge base, entails leaps forward in AI that just don’t exist in a general form yet. I’m not saying it will never happen, far from it, but the system that first truly passes the Turing Test is yet to come and is likely many years away from reality.

The Eerie Beauty of the Strandbeests.

I remember attending an exhibition about Leonardo Da Vinci a couple of years ago and being astounded by the complexity of some of the machines he created. It wasn’t just that he’d figured these things out where no one else had; it was that some of his designs didn’t seem possible to me, at least with the technology available to him at the time. Ever since then I’ve had something of a fascination with mechanical structures, marvelling at creations that seem like they should be impossible. My favourite example is Theo Jansen’s Strandbeests, a new form of life that he has been striving to create for the better part of 25 years.

[Video: Theo Jansen’s Strandbeests in motion]

All of his designs are built around the same crank-driven leg linkage, a set of rigid rods whose carefully tuned proportions let the feet trace a smooth walking motion when an outside force, in this case the wind at a beach, acts on them. His initial designs only functioned while the wind was blowing, however later designs, many of which you can see in the video, are able to store wind energy and use it later through some rather clever mechanical engineering. Unfortunately I couldn’t find the best video, which has Theo explaining how they work, as that one also shows another Strandbeest he created that avoids walking itself into the ocean (something which I’m still not sure I completely understand).

The idea of creating a new form of life, even if it doesn’t meet the 7 rules for biological life, is a pretty exciting one, and it’s found an unlikely means of replication: 3D printing. After many people made their own versions of his Strandbeests (I even printed a simple one off, although it broke multiple times during assembly) Theo made the designs available through Shapeways, essentially giving the Strandbeests a way to procreate. Sure it’s not as elegant as what us biological entities have but the idea does have a cool sci-fi bent to it that tickles me in all the right places.

Taken to its logical extreme I guess a Reprap that printed Strandbeests that assembled other Repraps would be the ultimate end goal, although that’s both exciting and horrifying at the same time.


The Rights and Wrongs of Thunderf00t’s Solar Roadways Takedown.

Last week I wrote a post about the Solar Roadways Indiegogo campaign that had been sweeping the media. In it I did a lot of back of the envelope maths and came to the conclusion that the idea looked feasible, with the caveat that I was working with very little information. Still, I did a decent amount of research into the various components to make sure I was in the right order of magnitude. You’d think the venerable Thunderf00t’s takedown video on this project would put me at odds with him but, for the most part, I agree with him, although there were a couple of glaring oversights which I feel require some attention.

First off let me start with the stuff I agree with. He’s completely correct in his assertion that the tile construction isn’t optimal for road usage and that the issues arising from it are non-trivial. The idea of using LEDs sounds great in principle, but as he points out they’re nigh on invisible in broad daylight, which would make the road appear unmarked, a worrying prospect. Transporting the energy generated by these panels will also be quite challenging, as the current produced by your typical solar panel isn’t conducive to being put directly onto the grid. The properties of the road surface also require further validation: whilst the demonstrations shown by Solar Roadways say they’re up to standard, there’s little proof to back up those claims so far. Finally, the idea of melting snow seemed plausible to me on first look but I hadn’t run any numbers against that claim, so I’d defer to Thunderf00t’s analysis on this one.

However his claims about the glass are off the mark in many cases. Firstly it’s completely possible to make clear glass from recycled coloured glass, usually through the use of decolourising additives like erbium oxide or manganese oxide. I agree that it’s unlikely they have the facilities available to do this right now, but it’s not out of the realm of possibility. Thunderf00t also makes the mistake of taking the single item price of a piece of tempered glass off eBay and using that to extrapolate an astronomical cost for covering all of the roads in the USA. In fact tempered glass produced at volume is rather cheap, about $7.50 per square meter when you check out some large scale manufacturers, which makes the cost look far more reasonable than the $20 trillion originally quoted.
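Re-running his extrapolation with a volume price instead of a single-unit eBay listing shows just how sensitive the conclusion is (the road area and retail price here are rough assumptions of mine, the latter backed out of the ~$20 trillion figure):

```python
# Re-running the glass costing with a volume price instead of a
# single-unit eBay listing. Both the road area and the retail price
# are rough assumptions for illustration only.
us_road_area_m2 = 75e9     # assumed order-of-magnitude US road surface area
retail_price_per_m2 = 265  # assumed, backed out of the ~$20 trillion quote
bulk_price_per_m2 = 7.50   # volume price from large scale manufacturers

print(f"retail pricing: ~${us_road_area_m2 * retail_price_per_m2 / 1e12:.0f} trillion")
print(f"bulk pricing:   ~${us_road_area_m2 * bulk_price_per_m2 / 1e12:.2f} trillion")
```

The same assumed area goes from roughly $20 trillion of glass down to under $1 trillion purely by pricing it at volume, which is why single-unit extrapolations are so misleading.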

The same thing can be said for the solar panels, PCBs, LEDs and microcontrollers underneath them. Solar panels can be had for the low, low price of $0.53 per watt (a grand total of about $30 per panel) and RGB LEDs for about $0.08 each (you could have 1000 in each panel for $80). Indeed the construction of the panels themselves is likely not that expensive, especially at volume; it’s the preparation of the surface and the conduit channel that are likely to cost more than a traditional road. This is because you’d have to do much the same site prep work for both (you can’t just lay these tiles onto dirt), with the panels themselves then being an incidental cost on top.
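Putting those unit prices together gives a rough per-panel bill of materials (the panel wattage and glass area are my assumptions, as Solar Roadways hasn’t published either figure):

```python
# Rough per-panel bill of materials using the unit prices above.
# Panel wattage and glass area are assumptions for illustration.
panel_watts = 57                 # assumed, consistent with ~$30 at $0.53/W
solar_cost = panel_watts * 0.53  # ~$30 of solar cells
led_cost = 1000 * 0.08           # $80 for a generous 1000 RGB LEDs
glass_cost = 1.2 * 7.50          # assumed ~1.2 square metres of tempered glass

total = solar_cost + led_cost + glass_cost
print(f"per-panel parts: ~${total:.0f} before PCB, microcontroller, "
      "assembly and site prep")
```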

Tempered glass is also a lot harder than your regular glass, something which Thunderf00t missed in his analysis. It’s true that regular glass has a Mohs hardness of around 5, but tempered glass can be up to 7 or higher depending on the additives used. Traditional road surfaces have a very similar hardness to tempered glass, meaning the tiles would suffer no more wear than a traditional road surface would. Whether this would mean a degradation in optical quality, and therefore solar efficiency, over time is something I can’t really comment on, but the argument of sand and other debris wearing away the surface doesn’t really hold up.

All this being said, Thunderf00t hits on the big issues Solar Roadways has to face for their idea to become a reality. Whilst I’m still erring on the side of it being possible I do admit there are numerous gaps in our knowledge of the product, many of which could quickly render it completely infeasible. Still, there’s potential for the idea to work in many areas, like the vast highways throughout Australia, even if some of the more outlandish ideas like melting the snow might not work out. It will be interesting to see how Solar Roadways responds to this, as there are numerous questions here that can’t go unanswered.