Technology


Intel Keeps Moore’s Law Alive With 14nm Fabrication.

The popular interpretation of Moore’s Law is that computing power, namely of the CPU, doubles every two years or so. This is then extended to pretty much all aspects of computing such as storage, network transfer speeds and so on. Whilst this interpretation has held up reasonably well in the 40+ years since the law was coined it’s not completely accurate, as Moore was actually referring to the number of components that could be integrated into a single package for a minimum cost. Thus the real driver behind Moore’s Law isn’t performance, per se, it’s the cost at which we can provide said integrated package. Keeping pace with this law hasn’t been easy, but innovations like Intel’s new 14nm process are what have kept us on track.


CPUs are created through a process called photolithography, whereby a substrate, typically a silicon wafer, has the transistors etched onto it through a process not unlike developing a photo. The defining characteristic of this process is the minimum feature size it can etch onto the wafer, usually expressed in nanometers. It was long thought that 22nm would be the limit for semiconductor manufacturing as that process was approaching the physical limitations of the substrates used. However Intel, and many other semiconductor manufacturers, have been developing processes that push past this and today Intel has released in-depth information about its new 14nm process.

The improvements in the process are pretty much what you’d come to expect from a node shrink of this nature. A reduction in node size typically means that a CPU can be made with more transistors, better performance and lower power draw than a similar CPU built on a larger node. This is most certainly the case with Intel’s new 14nm fabrication process and, interestingly enough, they appear to be ahead of the curve, so to speak, with the improvements in this process sitting slightly ahead of the trend. However the most important factor, at least with respect to Moore’s Law, is that they’ve managed to keep reducing the cost per transistor.

One of the biggest cost drivers for CPUs is what’s called the yield of the wafer. Each of these wafers costs a certain amount of money and, depending on how big and complex your CPU is, you can only fit a certain number of them on there. However not all of those CPUs will turn out to be viable and the percentage of usable CPUs is what’s known as the wafer yield. Moving to a new node size typically means that your yield takes a dive, which drives up the cost of each CPU significantly. The recently embargoed documents from Intel reveal, however, that the yield for the 14nm process is rapidly approaching that of the 22nm process, which is considered to be Intel’s best yielding process to date. This, plus the increased transistor density that’s possible with the new manufacturing process, is what has led to the price per transistor dropping, giving Moore’s Law a little more breathing room for the next couple of years.
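
To make that relationship concrete, here’s a rough sketch of the arithmetic; every number in it is a made-up illustrative figure, not anything out of Intel’s documents:

```python
# Back-of-the-envelope cost-per-transistor arithmetic.
# All of these numbers are illustrative assumptions, not Intel figures.
wafer_cost = 5000.0            # dollars per processed wafer (assumed)
dies_per_wafer = 500           # dies that physically fit on the wafer (assumed)
yield_rate = 0.80              # fraction of dies that actually work (assumed)
transistors_per_die = 1.4e9    # transistor count of the design (assumed)

good_dies = dies_per_wafer * yield_rate
cost_per_good_die = wafer_cost / good_dies
cost_per_transistor = cost_per_good_die / transistors_per_die

print(f"Cost per working die: ${cost_per_good_die:.2f}")
print(f"Cost per transistor:  ${cost_per_transistor:.2e}")

# A node shrink packs more dies onto the same wafer, but if the yield drops
# the extra defective dies eat that gain, which is why yield parity with
# 22nm is what lets the cost per transistor keep falling.
```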

This 14nm process is what will be powering Intel’s new Broadwell chips, the first of which are due out later this year. Migrating to the new manufacturing process hasn’t been without its difficulties, which is why Intel is releasing only a subset of the Broadwell chips this year, with the rest to come in 2015. Until we get our hands on some of the actual chips there’s no telling just how much of an improvement they’ll be over their Haswell predecessors, but the die shrink alone should see some significant improvements. With yields fast approaching those of the 22nm process they’ll hopefully be quite reasonably priced too, for a new technology at least.

It just goes to show that Moore’s Law is proving to be far more robust than anyone could have predicted. Exponential growth functions like that are notoriously unsustainable, yet it seems every time we come up against another wall that threatens to kill the law off, another innovative way to deal with it comes along. Intel has long been at the forefront of keeping Moore’s Law alive and it seems like they’ll continue to be its patron saint for a long time to come.


Winding Down Google+ is the Right Move, But Might Be Too Late.

When Google+ was first announced I counted myself among its fans. Primarily this was due to the interface which, unlike every other social media platform at the time, was clean, and there was the possibility I could integrate all my social media in the one spot. However as time went on it became apparent that this wasn’t happening any time soon and the dearth of people actively using it meant that it just fell by the wayside. As other products got rolled into it I wasn’t particularly fussed, as I wasn’t a big user of most of them in the first place, however I was keenly aware of the consternation from the wider user base. It seems that Google might have caught on to this and is looking to wind down the Google+ service.


Back in April the head of Google+, Vic Gundotra, announced that he was leaving the company. Whilst Google maintained that this would not impact their strategy, many sources reported that Google was abandoning its much loathed approach of integrating Google+ into everything, and that decrease in focus likely meant a decrease in resources. Considering that no one else can come up with a good reason why Gundotra, a 7 year veteran of Google, would leave the company, it does seem highly plausible that something is happening to Google+ and that it didn’t bode well for his future there. The question in my mind then is whether or not winding down the service will restore some of the goodwill lost in Google’s aggressive integration spree.

Rumours have it that Google+ Photos will be the first service to be set free from the iron grip of its parent social network. Considering that the Photos section of Google+ started out as the web storage part of their Picasa product it makes sense that this would be the first service to be spun out. How it will compete with other, already established offerings is somewhat up in the air, although it does have the benefit of already being tightly integrated with the Android ecosystem. If they’re unwinding that integration then it makes you wonder if they’ll continue the trend with other services, like YouTube.

For the uninitiated, the integration of YouTube and Google+ was met with huge amounts of resistance, with numerous large channels openly protesting it. Whilst some aspects of the integration have since been relaxed (like allowing you to use a pseudonym rather than your real name) the vast majority of features that many YouTubers relied on are simply gone, replaced with seemingly inferior Google+ alternatives. If Google+ is walking off into the sunset then they’d do well to bring back the older interface, although I’m sure the stalwart opponents won’t be thanking Google if they do.

Honestly, whilst I liked Google+ originally, and even made efforts to actively use the platform, it simply hasn’t had the required amount of buy-in to justify Google throwing all of its eggs into that basket. Whilst I like some of the integration between the various Google+ services I completely understand why others don’t, especially if you’re a content creator on one of their platforms. Winding down the service might see a few cheers here or there, but honestly the damage has already been done and it’s up to Google to figure out how to win users back in a post-Google+ world.


Sometimes The Internet Does Forget.

Last year I fucked up.

There’s really no other way to put it: I made the rookie mistake of not backing up everything before I started executing commands that could have some really bad consequences. I’d like to say it was hubris, thinking that my many years in the industry had made me immune to things like this, but in reality it was just my lack of knowledge of how certain commands worked. Thankfully it wasn’t a dreaded full wipe and I was able to restore the essence of this blog (i.e. the writing) without too much trouble, however over time it became apparent just how incomplete that restore was. Whilst I was able to restore quite a lot of the pictures I’ve used over the years, plenty were still missing, some of them from my favourite posts.


Thankfully, after writing some rather complicated PowerShell scripts, I was able to bulk restore a lot of images. Mostly this was because of the way I do the screenshots for my reviews, meaning there was a copy of pretty much everything on my PC, I just had to find them. I’ve been reviewing games for quite some time though and that’s meant I’ve changed PCs a couple of times, so some of the images are lost in the sea of old hard drives I have lying around the place. Whilst I was able to scrounge up a good chunk of them by finding an old version of the server I used to host locally, there were still some images that eluded me, forcing me to think of other places that might have a copy of them.
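
The actual scripts were PowerShell, but the core of the idea was just a filename match-and-copy across every old drive I could get mounted; a rough sketch of the same approach, here in Python with placeholder drive paths and a placeholder missing-image list, looks something like this:

```python
# Sketch of the bulk restore: walk a set of old drives looking for files whose
# names match images missing from the blog, then copy them into a staging
# folder. The paths and the missing-list file are placeholders.
import shutil
from pathlib import Path

missing = set(Path("missing_images.txt").read_text().splitlines())
search_roots = [Path("D:/old-pc-backup"), Path("E:/screenshots")]
staging = Path("restored")
staging.mkdir(exist_ok=True)

for root in search_roots:
    for candidate in root.rglob("*"):
        if candidate.is_file() and candidate.name in missing:
            shutil.copy2(candidate, staging / candidate.name)
            missing.discard(candidate.name)

print(f"{len(missing)} images still unaccounted for")
```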

My site has been on the Wayback Machine for some time now so I figured that there would (hopefully) be a copy of most of my images on there. For the most part there was, even the full sized ones, however there were still multiple images that weren’t there either. My last bastion of hope was Google’s cache of my website, however they only store (or at least, make available) the latest version that they have indexed. Sometimes this meant that I could find an image here or there, as they seem to be archived separately and aren’t deleted when you remove the page, however it was still a hit or miss affair. In the end I managed to get the list of missing images down from about 2000 to 150 and, thanks to a fortuitous hard drive backup I found, most of those will hopefully be eliminated in short order.
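
For the Wayback Machine side of things there’s a public availability API you can script against to find the closest archived snapshot of a given URL; a minimal sketch of that (the image URL here is just a placeholder) looks like this:

```python
# Ask the Wayback Machine for the closest snapshot of a URL and download it
# if one exists. The image URL below is a placeholder, not a real path.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url):
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

snapshot = closest_snapshot("http://example.com/wp-content/uploads/missing-image.jpg")
if snapshot:
    urllib.request.urlretrieve(snapshot, "missing-image.jpg")
else:
    print("No archived copy found")
```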

What kept me going throughout most of this was the mantra that many privacy advocates and parents alike have parroted many times: the Internet never forgets. For the most part I’d be inclined to agree with this as the vast majority of the information that I had put out there, even though I had erased the source, was still available for anyone to view. However the memory of the Internet, much like that of the humans that run it, isn’t a perfect one, routinely forgetting things, jumbling them up or just plain not remembering them at all. The traces of what you’re searching for are likely there somewhere, but there’s no guarantee that the Internet will remember everything for you.

Turnbull’s MTM NBN Will be Later, Slower and More Expensive.

There are two main reasons why I’ve avoided writing about the NBN for the last couple of months. For the most part it’s been because there’s really been nothing of note to report, and sifting through hours of senate talks to find a nugget of new information to write about isn’t really something I’m particularly enthused about doing. Secondly, as someone who’s deeply interested in technology (and makes his living out of services that could make heavy use of the NBN) the current state of the project is, frankly, infuriating and I don’t think people enjoy reading about how angry I am. Still it seems that the Liberals’ MTM NBN plan has turned from a hypothetical farce into a factual one and I’m not one to pass up an opportunity to lay down criticism where criticism is due.


The slogan the Liberals ran with during their election campaign was “Fast. Affordable. Sooner.”, promising that they’d be able to deliver at least 25Mbps to every Australian by the end of 2016, ramping up to 50Mbps by the end of 2019. This ended up being called the Multi-Technology Mix (MTM) NBN, which would now incorporate the existing HFC networks rather than overbuilding them and would switch to FTTN technology rather than FTTP. The issues with this plan were vast and numerous (ones I’ve covered in great detail in the past) and suffice to say the technology community in Australia didn’t buy into the ideas one bit. Indeed as time has progressed the core promises of the plan have dropped off one by one, with NBNCo now proceeding with the MTM solution despite a cost-benefit analysis not having been completed and the speed guarantee now gone completely. If that wasn’t enough, it’s come to my attention that even though they’ve gone ahead with the solution NBNCo hasn’t been able to connect a single customer to FTTN.

It seems the Liberals’ promises simply don’t stand up to reality, fancy that.

The issues they seem to be encountering with deploying their FTTN trial are the ones many of the more vocal critics have been harping on about for a long time, primarily the power and maintenance requirements of FTTN cabinets. Their Epping trial has faced several months of delays because they weren’t able to source adequate power, a problem which currently has no timeline for a solution. The FTTP NBN, which uses Gigabit Passive Optical Network (GPON) technology, does not suffer from this kind of issue at all and this was showing in the ramp up in deployment numbers that NBNCo was seeing before it stopped its FTTP rollouts. If just the trial of the MTM solution is having this many issues then it follows that the full rollout will fare no better, and that puts an axe to the Liberals’ election promises.

We’re rapidly approaching the end of this year, which means the timeline the Liberals laid out is starting to look less and less feasible. Even if the trial site gets everyone on board before the end of this year that still leaves only 2 years for the rest of the infrastructure to be rolled out. The FTTP NBN wasn’t even approaching those numbers, so there’s no way in hell that the MTM solution will be able to accomplish that, even with their little cheat of using the HFC networks.

So there goes the idea of us getting the NBN sooner but do any of their other promises hold true?

Well the speed guarantee went away some time ago, so even the Liberals admit that their solution won’t be fast; the only thing they might be able to argue is that they can do it cheaper. Unfortunately for Turnbull, his assumption that Telstra would just hand over the copper free of charge was something Telstra had no interest in entertaining. Indeed as part of the renegotiation of the contract with Telstra, NBNCo will be paying some $150 million for access to 200,000 premises worth of copper which, if extrapolated to all of Australia, would be around $5.8 billion. This does not include the cabinets or remediating any copper that can’t handle FTTN speeds, which will quickly eat into any savings on the deal. That’s not even going into the ongoing costs these cabinets will incur over their lifetimes, which are an order of magnitude more than what a GPON network would require.
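
For the curious, the extrapolation behind those figures is simple enough to sanity check; the premises count below is just what the two quoted numbers imply, not an official figure:

```python
# Back-of-the-envelope on the quoted Telstra copper figures.
deal_cost = 150e6            # dollars, for access to 200,000 premises worth of copper
deal_premises = 200_000
extrapolated_total = 5.8e9   # the roughly $5.8 billion Australia-wide figure

cost_per_premise = deal_cost / deal_premises
implied_premises = extrapolated_total / cost_per_premise

print(f"Cost per premise: ${cost_per_premise:,.0f}")                 # ~$750
print(f"Premises implied by $5.8B: {implied_premises / 1e6:.1f}M")   # ~7.7 million
```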

I know I’m not really treading any new ground by writing all this but the MTM NBN is beyond a joke now; a failed election promise that’s done nothing to help the Liberals’ waning credibility and will only do damage to Australia’s technology sector. Even if they do get voted out come next election it’ll be years before the damage can be undone, which is a royal shame as the NBN was one of the best bits of policy to come out of the tumultuous time that was Labor’s last 2 terms in office. Maybe one day I’ll be able to look back on all my rants on this topic and laugh about it but until that day comes I’ll just be yet another angry IT sector worker, forever cursing the government that took away my fibre filled dream.

Print Yourself a House.

Ever since I first saw a 3D printer I wondered how long it’d be before they’d start scaling up in size. Now I’m not talking about incremental size improvements that we see every so often (like with the new Makerbot Z18), no I was wondering when we’d get industrial scale 3D printers that could construct large structures. The step between your run of the mill desktop 3D printer and something of that magnitude isn’t a simple matter of scaling up the various components, as many of the assumptions made at the desktop scale simply don’t apply when you get into large scale construction. It seems that day has finally come as Suzhou Yingchuang Science and Trade Development Co has developed a 3D printer capable of creating full size houses:

[Embedded YouTube video of the 3D printed houses]

Details of the makeup of the material used, as well as its structural properties, aren’t currently forthcoming, however the company behind them claims that it’s about 5 times as hard as traditional building materials. They’re apparently using a few of these 3D printed buildings as offices for some of their employees so you’d figure they’re somewhat habitable, although I’m sure they’re in a much more finished state than the ones shown above. Still for a first generation product they seem pretty good and, if the company’s claims hold up, they’d become an attractive way to provide low cost housing to a lot of people.

What I’d really be interested to see is how the cost and materials used compare to those of traditional construction. It’s a well known fact that building new housing is an incredibly inefficient process, with a lot of materials wasted during construction. Methods like this provide a great opportunity to reduce the amount of waste generated as there’s no excess material left over once construction has completed. Further refinement of the process could also ensure that post-construction work, like cabling and wiring, is done in a much more efficient manner.

I’m interested to see how inventive they can get with this as there’s potentially a world of new housing designs out there to be exploited using this new method. That will likely be a long time coming however, as not everyone will have access to one of these things to fiddle around with, but I’m sure just the possibility of a printer of this magnitude has a few people thinking about it already.


Windows Threshold: Burying Windows 8 for the Sake of 9.

It’s hard to deny that Windows 8 hasn’t been a great product for Microsoft. In the 2 years that it’s been on the market it’s managed to secure some 12% of total market share, which sounds great on the surface, however its predecessor managed to nab some 40% in a similar time frame. The reasons behind this are wide and varied, however there’s no mistaking that a large part of it was the Metro interface, which just didn’t sit well with primarily desktop users. Microsoft, to their credit, has responded to this criticism by giving consumers what they want but, like Vista, the product that Windows 8 is today is overshadowed by its rocky start. It seems clear now that Microsoft is done with Windows 8 as a platform and is now looking towards its successor, codenamed Windows Threshold.

Not a whole lot is known about what Threshold will entail, but what is known points to a future where Microsoft is distancing itself from Windows 8 in the hopes of getting a fresh start. It’s still not known whether Threshold will become known as Windows 9 (or whatever name they might give it), however the current release date is slated for sometime next year, in line with Microsoft’s new dynamic release schedule. This would also put it at 3 years after the initial release of Windows 8, which ties into the larger Microsoft product cycle. Indeed most speculators are pegging Threshold to be much like the Blue release of last year, with all Microsoft products receiving an update upon release. What interests me about this release isn’t so much what it contains, more what it’s going to take away from Windows 8.

Whilst Microsoft has made inroads into making Windows 8 feel more like its predecessors, the experience is still deeply tied to the Metro interface. Pressing the Windows key doesn’t bring up the start menu and Metro apps still have that rather obnoxious behaviour of taking over your entire screen. Threshold however is rumoured to do away with this, bringing back the start menu with a Metro twist that will allow you to access those kinds of applications without having to open up the full interface. Indeed for desktop systems, those that are bound to a mouse and keyboard, Metro will be completely disabled by default. Tablets and hybrid devices will still retain the UI, with the latter switching between modes depending on how they’re being used (desktop when docked, Metro when in tablet form).

From memory such features were actually going to make up part of the next Windows 8 update, not the next version of Windows itself. Microsoft did add some similar features to Windows 8 in the last update (desktop users now default to the desktop on login, not Metro) but the return of the start menu and the other improvements are seemingly no longer destined for Windows 8. Considering just how poor the adoption rate of Windows 8 has been this isn’t entirely surprising, and Microsoft might be looking for a clean break away from Windows 8 in order to drive better adoption of Threshold.

It’s a strategy that has worked well for them in the past so it shouldn’t be surprising to see Microsoft doing this. For those of us who actually used Vista (after it was patched to remedy all its issues) we knew that Windows 7 was Vista under the hood; it was just visually different enough to break past people’s preconceptions about it. Windows Threshold will likely be the same, different enough from its direct ancestor that people won’t recognise it but sharing the same core that powered it. Hopefully this will be enough to ensure that Windows 7 doesn’t end up being the next XP, as I don’t feel that’s a mistake Microsoft can afford to keep repeating.

Samsung’s V-NAND Has Arrived, and It’s Awesome.

When people ask me which one component of their PC they should upgrade my answer is always the same: get yourself an SSD. It’s not so much the raw performance characteristics that make the upgrade worth it, more that all those things many people hate about computers seem to melt away when you have an SSD behind them. All your applications load near instantly, your operating system feels more responsive and those random long lock ups, where your hard drive seems to churn over for ages, simply disappear. However the one drawback is their capacity and cost, offering far less space at an order of magnitude more per gigabyte than the good old spinning rust. Last year Samsung announced their plans to change that with V-NAND and today they deliver on that promise.


The Samsung 850 Pro is the first consumer drive to be released with V-NAND technology and is available in sizes up to 1TB. The initial promise of 128Gbit per chip has unfortunately fallen a little short of the mark, with the current production version delivering around 86Gbit per chip. This is probably due to economic reasons, as the new chips under the hood of this SSD are smaller than the first prototypes, which helps to increase the yield per wafer. Interestingly enough these chips are being produced on an older lithography process, 30nm instead of the 20nm that’s the current standard for most NAND chips. That might sound like a step back, and indeed it would be for most hardware, however the performance of the drive is pretty phenomenal, meaning that V-NAND is only going to get better with time.

Looking at the performance reviews, the Samsung 850 Pro seems to be a top contender, if not the best, in pretty much every category. In the world of SSDs having consistently high performance across the board like this is very unusual, as typically a drive manufacturer will tune performance to a certain profile. Some favour random reads, others sustained write performance, but the Samsung 850 Pro seems to do pretty much all of them without breaking a sweat. However what really impressed me about the drive wasn’t so much the raw numbers, it was how the drive performed over time, even without the use of TRIM.

[Graph: Samsung 850 Pro 512GB HD Tach results]

SSDs naturally degrade in performance over time, not because the components wear out but due to the nature of how they read and write data. Essentially it comes down to the fact that a block holding stale data has to be erased before it can be written to again, a rather costly process. A new drive is all blank space so these erases don’t need to happen, but over time, as data is written and rewritten, more and more blocks end up in that stale state. The TRIM command tells the SSD that certain blocks have been freed up by the filesystem, allowing the drive to flag them as unused and erase them in the background, recovering some of the performance. The graph above shows what happens when the new Samsung 850 Pro reaches that performance degradation point, even without the use of TRIM. Compare that to other SSDs and this kind of consistent performance almost looks like witchcraft, but it’s just the V-NAND technology showing one of its many benefits.
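
If you want a feel for why that matters, here’s a toy model of the erase-before-write penalty; the costs are arbitrary units and it’s a gross simplification of what a real controller and its garbage collection actually do:

```python
# Toy model of SSD write behaviour: a block the drive still believes is in use
# must be erased before it can be rewritten, while a TRIMmed block can be
# erased ahead of time in the background. Costs are arbitrary units.
WRITE_COST, ERASE_COST = 1, 10

class ToySSD:
    def __init__(self, blocks):
        self.state = ["clean"] * blocks   # "clean" or "dirty"

    def write(self, block):
        cost = WRITE_COST
        if self.state[block] == "dirty":  # stale data has to be erased first
            cost += ERASE_COST
        self.state[block] = "dirty"
        return cost

    def trim(self, block):
        # Filesystem tells the drive the block is free; it can be erased early.
        self.state[block] = "clean"

ssd = ToySSD(4)
print(ssd.write(0))   # 1  - fresh block, cheap
print(ssd.write(0))   # 11 - overwriting without TRIM pays the erase penalty
ssd.trim(0)
print(ssd.write(0))   # 1  - a TRIMmed block behaves like new again
```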

Indeed Samsung is so confident in these new drives that it’s giving all of them a 10 year warranty, something you can’t find even on good old spinning rust drives anymore. I’ll be honest, when I first read about V-NAND I had a feeling that the first drives would likely be failure ridden write offs, like most new technologies are. However this new drive from Samsung appears to be the evolutionary step that all SSDs need to take, as this first iteration device is walking all over the competition. I was already sold on a Samsung SSD for my next PC build but I think the 850 Pro just made the top of my list.

Now if only those G-SYNC monitors could come out already, then I’d be set to build my next gen gaming PC.


Google’s Cardboard: VR For The Masses.

I can remember my first encounter with virtual reality way back in the 90s. It was a curiosity more than anything else, something that was available at this one arcade/pizza place in the middle of town. You’d go in and there it would be, two giant platforms containing people with their heads strapped into oversized head gear. On the screens behind them you could see what they were seeing, a crude polygonal world inhabited by the other player and a pterodactyl. I didn’t really think much of it at the time, mostly since I couldn’t play it anywhere but there (and that was an hour drive away) but as I grew older I always wondered what had become of that technology. Today VR is on the cusp of becoming mainstream and it looks like Google wants to thrust it into the limelight.


Meet Google Cardboard, the ultra low cost virtual reality headset that Google gave out to every attendee at I/O this year. It’s an incredibly simple idea, using your smartphone’s screen and a pair of lenses to send a different image to each eye. Indeed if you were so inclined a similar system could be used to turn any screen into a VR headset, although the lenses would need to be crafted for the right dimensions. With that in mind the range of handsets that Google Cardboard supports is a little limited, mostly to Google Nexus handsets and some of their closely related cousins, but I’m sure future incarnations that support a wider range of devices won’t be too far off. Indeed if the idea has piqued your interest you can get an unofficial version for the low cost of $25, a bargain if you’re looking to dabble in VR.
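
The underlying trick is about as old as stereoscopy itself: render the scene twice, with the virtual camera offset by half your interpupillary distance for each eye, and show each result on its own half of the screen behind a lens. A minimal sketch of the eye-offset maths (the render() call is a stand-in, not any real API):

```python
# Stereo rendering in a nutshell: shift the camera half the interpupillary
# distance (IPD) to the left and right, render each view, and draw the results
# side by side on the phone screen. render() is a stand-in, not a real API.
IPD = 0.064  # metres; a typical adult interpupillary distance

def eye_positions(camera_pos, right_vector, ipd=IPD):
    """Return (left_eye, right_eye) positions offset along the camera's right vector."""
    half = ipd / 2.0
    left = tuple(c - half * r for c, r in zip(camera_pos, right_vector))
    right = tuple(c + half * r for c, r in zip(camera_pos, right_vector))
    return left, right

left_eye, right_eye = eye_positions((0.0, 1.7, 0.0),   # camera at roughly head height
                                    (1.0, 0.0, 0.0))   # camera's right vector
# left_half  = render(scene, left_eye)    # drawn on the left half of the screen
# right_half = render(scene, right_eye)   # drawn on the right half of the screen
```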

Compared to the original OculusVR specs most smartphones are more than capable of driving Google Cardboard with an acceptable level of performance. My current phone, the Sony Xperia Z, has a full 1080p screen and enough grunt to run some pretty decent 3D applications. That, combined with the bevy of sensors in most modern smartphones, makes Google Cardboard a pretty brilliant little platform for testing out what you can do with VR. Of course that also means the experience will vary wildly depending on what handset you have, but for those looking for a cheap platform to validate ideas on it’s hard to argue against it.

Of course this raises the question of what Google’s larger plan is for introducing this concept to the world. Ever since the breakaway success that was the OculusVR it’s been obvious that there’s consumer demand for VR and it only seems to be increasing as time goes on. However most applications are contained solely within the games industry, with only a few interesting experiments (like Living with Lag) breaking outside that mould. There’s a ton of augmented reality applications on Android which could potentially benefit from widespread adoption of something like Cardboard, however beyond that I’m not so sure.

I think it’s probably a gamble on Google’s part, as history has proven that throwing a concept out to the masses is a great way to root out innovative ideas. Google might not have any solid plans for developing VR of this nature themselves, but the community that arises around the idea could prove a fruitful place for applications that no one has thought of before. I had already committed myself to a retail version of the Oculus when it comes out, however, so whilst Cardboard might be a curiosity my heart is unfortunately promised to another.


Facebook is Being Creepy Again, But They Didn’t Have to be.

In the now decade long history of Facebook we’ve had numerous scandals around privacy and what Facebook should and should not be doing with the data they have on us. For the most part I’ve tended to side with Facebook as, whilst I share everyone’s concerns, use of the platform is voluntary in nature and should you highly object to what they’re doing you’re free to not use it. The fact is that any service provided to you free of charge needs to make revenue somewhere and for Facebook that comes from your data. However this doesn’t seem to stop people from being outraged at something Facebook does with almost clockwork regularity, the most recent of which was tinkering with people’s feeds to see if emotions could spread like the plague.

The results are interesting as they show that emotions can spread through social networks without the need for direct interaction; it can happen just from reading status updates. The experimenters sought to verify this by manipulating the news feeds of some 689,000 Facebook users to skew the emotional content in one direction and then watching how those users’ emotional states fared further down the line. The results confirmed their initial hypothesis, showing that emotions expressed on Facebook can spread to others. Whilst it’s not going to cause a pandemic of ecstasy or a sudden whirlwind of depression cases worldwide, the evidence is there to suggest that your friends’ sentiment on Facebook does influence your own emotional state.

Whilst it’s always nice to get data that you can draw causal links from (like with this experiment) I do wonder why they bothered to do this when they could’ve done a much more in depth analysis on a much larger subset of the data they already have. They could have just as easily taken a much larger data set, classified it in the same way and then done the required analysis. That would have sidestepped the rather contentious issue of informed consent that comes with experiments like this, as there’s no indication that Facebook approached these individuals before including them in the experiment.

Indeed that’s probably the only issue I have with Facebook doing this: whilst the data they have is theirs to do with as they see fit (within the guidelines of privacy regulations), attempting to alter people’s emotional state is a step too far. The people behind the study have come out and said that the real impact wasn’t that great and that it was all done in aid of making their product better, something which I’m sure is of little comfort to those who object to the experiment in the first place. Whilst the argument can be made that Facebook already manipulates users’ feeds (since you don’t see everything that your friends post anymore), doing so for site usability or user engagement is one thing; performing experiments on users without consent is another.

If Facebook wants to continue these kinds of experiments then they should really start taking steps to make sure that their user base is aware of what might be happening to them. Whilst I’m sure people would still take issue with Facebook doing widespread analysis of users’ emotional states, it would be a far cry from what they did with this experiment and would likely not run afoul of established experimental standards. The researchers have said they’ll take the reaction to these results under advisement, which hopefully means they’ll be more respectful of their users’ data in the future. However, since we’re going on 10 years of Facebook doing things like this, I wouldn’t hold my breath for immediate change.

Recycling Electromagnetic Energy? iFind, Surely You Jest.

If you’re reading this article, which is only available through the Internet, then you’re basking in a tsunami of electromagnetic radiation. Don’t worry though, the vast majority of these waves are so low power that they don’t make it through the first layer of your skin before dissipating harmlessly. Still they do carry power, enough so that this article can worm its way from the server all the way to the device you’re reading it on. Considering just how pervasive wireless signals are in our modern lives, it follows that there’s a potential source of energy there, one that’s essentially free and nigh on omnipresent. Whilst this is true, to some extent, actually harvesting a useful amount of it is at best impractical, but that hasn’t stopped people from trying.

[Screenshot: home wifi signal strength readings]

If you’re a longtime fan of Mythbusters like myself you’ll likely remember the episode they did on Free Energy back in 2004. In that episode they tested a myriad of devices to generate electricity, one of them being a radio wave extractor that managed to power half of a wristwatch. In an unaired segment they even rigged up a large coil of wire and placed it next to a high voltage power line and were able to generate a whopping 8mV. The result of all this testing was to show that, whilst there is some power available for harvesting, it’s not a usable quantity by any stretch of the imagination.

So you can imagine my surprise when a product like iFind makes claims like “battery free” and “never needs recharging” based around the concept of harvesting energy from the air.

The fundamental functionality of the iFind isn’t anything new, it’s just yet another Bluetooth tag system designed so you don’t lose whatever you attach the tag to. Its claim to fame, and one that’s earned it a rather ridiculous half a million dollars, is that it doesn’t have a battery (which it does, unless you want to get into a semantic argument about what “battery” actually means) and that it charges off the electromagnetic waves around you. They’ve even gone as far as to provide some technical documentation that shows the power generated from various signals. Suffice to say I think their idea is unworkable at best and, at worst, outright fraud.

The graphs they show in this comment would seem to indicate that it’s capable of charging even under very weak signal conditions, all the way down to -6dBm. That sounds great in principle until you take into account what a typical charging scenario for a device like this would be, like the “ideal” one they talk about in some of their literature: a strong wifi signal. The graph shown above is the signal strength of my home wifi connection (an ASUS RT-N66U for reference), with the peak readings being from when I had my phone right next to the antennas. That gives a peak power of some -22dBm, which sounds fine, right? Well, since those power ratings are logarithmic in nature, the power actually available is about 200 times weaker than their charging scenarios assume, which puts the actual charge time at about 1000 days. If you had a focused RF source you could probably provide it with enough power to charge quickly, but I doubt anyone has one of those in their house.
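
To put those figures in perspective, dBm is a logarithmic unit where every 10dB drop is a tenfold drop in power, so a quick conversion shows just how little energy is actually on offer at typical indoor signal levels:

```python
# dBm to milliwatts: power_mW = 10 ** (dBm / 10). Every 10 dB is a factor of
# ten in power, so seemingly small dBm differences are large power differences.
def dbm_to_mw(dbm):
    return 10 ** (dbm / 10)

for level in (-6, -22):
    print(f"{level:>4} dBm = {dbm_to_mw(level) * 1000:7.1f} microwatts")
# -6 dBm  ~ 251.2 microwatts
# -22 dBm ~   6.3 microwatts
```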

There’s also the issue of what kind of power source they have, as the size precludes it from being anything hefty and they’re only referring to it as a “power bank”. Non-rechargeable batteries that fit within that form factor usually have capacities on the order of a couple of hundred milliamp-hours, with rechargeable variants having much smaller capacities. Similar devices like Tile, which includes a non-rechargeable, non-replaceable battery, last about a year before dying, which suggests a minimum power drain of at least a couple of mAh per day. Considering the iFind is smaller and rechargeable I wouldn’t expect it to last more than a couple of weeks before giving up. Of course, since there are no specifications for either of them it’s hard to judge, but the laws of physics don’t differ between products.

However I will stop short of calling iFind a scam; rather, I think it’s a completely misguided exercise that will never deliver on its promises. They’ve probably designed something that works under their lab conditions, but the performance just won’t hold up in the real world. There are a lot of still unanswered questions that have been asked of them, answers which would go a long way to assuring people that what they’re making isn’t vaporware. Until they’re forthcoming with more information, however, I’d steer clear of giving them your money as it’s highly unlikely that the final product will perform as advertised.