Technology

3D Printed Model Jet Engine Demonstrates Reverse Thrust.

Have you ever wondered how planes manage to slow down so fast? It’s not that they have amazing brakes (although they do have some of the most impressive disc brakes you’ll ever see); most of the work is done by the very thing that launches them into the sky: the engines. The technique is called thrust reversal and, as the name implies, it redirects the thrust the engine is generating in the opposite direction, slowing the craft down rather than accelerating it. The ways modern aircraft achieve this are wide and varied, but one of the most common is demonstrated perfectly by this amazing 3D printed scale model:

The engine the model is based on is the General Electric GEnx-1B, the engine that powers Boeing’s new flagship plane, the 787 Dreamliner (its GEnx-2B sibling powers the revamped 747-8). Whilst this model lacks the complicated turbofan internals of its bigger brothers (replaced by a much simpler electric motor), the rest of it is built to specification, including the noise-reducing chevrons at the rear and, most importantly, the thrust reversal mechanism. What’s most impressive to me is that the whole thing was printed on a run-of-the-mill extruder-based 3D printer. If you’re interested in the engine itself there’s an incredible amount of detail over in the forum thread where the creator first posted it.

As you can see from the video, when the nacelle (the jet engine’s outer cover) slides back a series of fins pops up, blocking the fan’s output from exiting the rear of the engine. At the same time an opening appears that lets that airflow escape towards the front of the engine instead. This essentially changes the engine from pulling the craft through the air to pushing back against it, reducing the aircraft’s speed. Nearly all modern aircraft rely on some form of thrust reversal to shed speed once they’ve touched down; even turboprops do it, by reversing the pitch of their propeller blades.
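
To get a feel for why redirecting that airflow slows a plane so effectively, here’s a rough back-of-the-envelope sketch. The thrust, reverser efficiency and landing mass figures are illustrative assumptions, not GEnx or 787 specifications:

```python
# Rough estimate of the decelerating force available from reverse thrust.
# All numbers below are illustrative assumptions, not manufacturer figures.

forward_thrust_n = 300_000     # assumed takeoff-class thrust of one large turbofan (N)
reverser_efficiency = 0.45     # assumed fraction of that thrust usefully redirected forward
landing_mass_kg = 200_000      # assumed landing mass of a wide-body airliner (kg)
engines = 2

reverse_thrust_n = engines * forward_thrust_n * reverser_efficiency
deceleration = reverse_thrust_n / landing_mass_kg   # a = F / m

print(f"Reverse thrust: {reverse_thrust_n / 1000:.0f} kN")
print(f"Deceleration from the reversers alone: {deceleration:.2f} m/s^2")
# Wheel brakes and aerodynamic drag add to this; reversers matter most at high
# speed and on wet runways, where wheel braking is at its least effective.
```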

Many of us have likely seen jet engines doing exactly that but the view that this model gives us of the engine’s internals is just spectacular. It’s one of those things that you don’t often think about when you’re flying but without systems like these there’s no way we’d be flying craft as big as the ones we have today.


Windows 10: Much The Same, and That’s Just Fine.

New Windows releases bring with them a bevy of new features, use cases and controversy. Indeed I can think back to every new Windows release dating back to Windows 95 and there was always something that set off a furore, whether it was UI changes or compatibility issues. For us technical folk though a new version of Windows brings with it opportunity: a chance to experiment with the latest tech and dream about where we’ll take it. For the last month I’ve been using Windows 10 on my home machines and, honestly, whilst it feels much like its Windows 8.1 predecessor I don’t think that’s entirely a bad thing.

[Image: Windows 10 running on my ASUS Zenbook UX32V]

Visually Windows 10 is a big departure from its 8 and 8.1 predecessors as, for any non-tablet device, the full-screen Metro app tray is gone, replaced with a more familiar start menu. The full-screen option is still there however, hiding in the notifications area under the guise of Tablet Mode, and for transformer or tablet style devices this will be the default. The flat aesthetic has been taken even further, with all the iconography reworked to iron out almost any remaining 3D element. You’re also not allowed to change the login screen’s laser-lit window background without the aid of a resource hacker, likely due to the extreme amount of effort that went into creating the image.

For most, especially those who didn’t jump on the Windows 8 bandwagon, navigating the start menu will feel familiar, although I must admit that after the years I’ve spent with its predecessor it’s taken some getting used to. Whilst the charms menu might have disappeared the essence of it appears throughout Windows 10, mostly in the form of settings panels like Network Settings. For the most part they do make routine tasks easier, like selecting a wifi network, however once things get complicated (say, if you have two wireless adapters) you’re going to have to root around a little to find what you’re looking for. It is a slightly better system than what Windows 8 had, however.

To give myself the full Windows 10 experience I installed it on two different machines in two different ways. The first was a clean install on the laptop you see above (my trusty ASUS Zenbook UX32V) and that went along without a hitch. For those familiar with the Windows 8 style installer there’s not much to write home about here as it’s near identical to the previous installers. The second install was an upgrade on my main machine as, funnily enough, I had it on good word that the upgrade process was actually quite useable. As it turns out it is: pretty much everything came across intact. The only hiccup came from my audio drivers not working correctly (they seemed to default to digital out and wouldn’t let me change it), however a reinstall of the latest drivers fixed everything.

In terms of features there’s really not much in the way of things I’d consider “must haves”, however that’s likely because I’ve been using many of those features since Windows 8 was first released. There are some interesting little additions, like the gaming features that allow you to stream, record and capture screenshots in any DirectX game (something which Windows will remind you about when you start one up). Microsoft Edge is also astonishingly fast and quite useable, however since it’s so new the lack of extensions for it has precluded me from using it extensively. Interestingly Internet Explorer still makes an appearance in Windows 10, obviously for those corporate applications that continue to require it.

Under the hood there’s a bevy of changes (which I won’t bore you with here), however the most interesting thing about them is the way Windows 10 is structured for improvements going forward. You see Windows 10 is currently slated to be the last major release of Windows ever, but that doesn’t mean it will remain stagnant. Instead new features will be released incrementally on a much more frequent basis. Indeed the roadmaps I’ve seen show several major releases planned for the not-too-distant future and, if you want a peek at them, all you need to do is sign up for the Windows Insider program. Such a strategy could reap a lot of benefits, especially for organisations seeking to avoid the heartache of Windows version upgrades in the future.

All in all Windows 10 is pretty much what I expected it to be. It takes the best parts of Windows 7 and 8 and mashes them together into a cohesive whole that should appease the majority of Windows users. Sure there are some things that some won’t like, the privacy settings being chief among them, however they’re at least solvable issues rather than showstoppers like Vista’s compatibility or 8’s Metro interface. Whether Microsoft’s strategy of no more major versions ever is tenable is something we’ll have to see over the coming years, but at the very least they’ve got a strong base to build from.


An Artificial Brain in Your Pocket.

Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They’re also behind numerous other innovations like predictive text input, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren’t exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that however with their TrueNorth architecture, a truly revolutionary idea in computing.

[Image: 16 TrueNorth chips mounted on a DARPA SyNAPSE board]

The chip, 16 of which are shown above mounted on a DARPA SyNAPSE board, is most easily thought of as a massively parallel processor comprising some 4,096 neurosynaptic cores. Each of these cores contains 256 programmable neurons, totalling around 1 million per chip, wired together by roughly 256 million configurable synapses. Interestingly, whilst the chip’s transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel’s current offerings, it uses a fraction of the power you’d expect it to: a mere 70 milliwatts. That kind of power consumption means that chips like these could make their way into portable devices, something no one would really expect with transistor counts that high.

But why, I hear you asking, would you want a computerized brain in your pocket?

IBM’s TrueNorth chip is essentially the second half of the two-part system that is a neural network. The first step in creating a functioning neural network is training it on a large dataset: the larger the set, the better the network’s capabilities. This is why large companies like Google and Apple can create useable products out of them; they have huge troves of data to train them on. Then, once the network is trained, you can set it loose upon new data and have it give you insights and predictions, and that’s where a chip like TrueNorth comes in. Essentially you’d use a big network to form the model and then imprint it on a TrueNorth chip, making it portable.
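
That train-then-deploy split is easy to sketch in conventional code. The snippet below is a generic illustration only, not IBM’s TrueNorth toolchain: a tiny model is trained on a large(ish) dataset, its weights are frozen and compressed, and the “device” then does inference only.

```python
# Minimal sketch of the train-in-the-data-centre, infer-on-the-device split.
# Ordinary quantised inference stands in here for imprinting a trained network
# onto neuromorphic hardware; nothing below is IBM's actual tooling.
import numpy as np

rng = np.random.default_rng(0)

# --- Phase 1: train in the data centre on a large dataset ---
X = rng.normal(size=(10_000, 16))               # stand-in for a "huge trove" of data
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)              # synthetic labels

w = np.zeros(16)
for _ in range(500):                            # plain logistic-regression training
    z = np.clip(X @ w, -30, 30)
    p = 1 / (1 + np.exp(-z))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# --- Phase 2: freeze and compress the trained model for the device ---
scale = np.abs(w).max() / 127
w_int8 = np.round(w / scale).astype(np.int8)    # 8-bit weights, tiny memory footprint

# --- Phase 3: on-device inference only, no further training ---
def predict_on_device(x):
    logits = x @ (w_int8.astype(np.float64) * scale)
    return (logits > 0).astype(int)

print(predict_on_device(rng.normal(size=(5, 16))))
```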

The implications of this probably wouldn’t be immediately apparent for most, as the services would likely retain the same functionality, but it would eliminate the requirement for an always-on Internet connection to support them. This could open up a new class of smart devices with capabilities far surpassing anything we currently have, like a pocket translator that works in real time. The biggest issue I see for its adoption though is cost: a transistor count that high doesn’t come cheap, as you’re either relying on cutting-edge lithography or accepting significantly reduced wafer yields. Both of these lead to high-priced chips, likely even more expensive than current consumer CPUs.

Like all good technology however this one is a little way off from finding its way into our hands: whilst the chip exists, the software stack required to use it is still under active development. That might sound like a small thing, however this chip behaves in a way that’s completely different to anything that’s come before it. Once that’s been settled though the floodgates can be opened to the wider world and then, I’m sure, we’ll see a rapid pace of innovation that could spur on some wonderful technological marvels.


Simple Code Change Would Defeat RollJam, The $30 Device That Can Unlock Almost Any Car.

There are many things we trust implicitly, often on the simple assumption that if something is everywhere, or many people use it, then it must be safe. It’s hard not to do this as few of us possess the knowledge and understanding of all the systems we use to establish explicit trust. Indeed it’s often the case that these systems are considered safe until a flaw is exposed in them, breaking that trust, which must then be re-established. One such system, the keyless entry fobs many of us have with our cars, has just proven itself to be vulnerable to attack, yet it all could have been avoided with an incredibly simple change to the underlying code.

Keyless entry on your car relies on a fairly simple system for its operation. When you press the unlock button a code is wirelessly transmitted from your fob to your car, unlocking the doors. Back in the early days the code each fob sent was unique and fixed which, whilst preventing one person’s fob from opening your car, made the code incredibly simple to copy. This was then changed to the current standard of a “rolling code” which changes every time you press the button. That made straight-up duplication impossible, as the same code is never used twice, however it opened the system up to another, more subtle, attack.

Whilst the codes changed every time, the one thing the manufacturers of these systems didn’t do was invalidate older codes once a newer one had been accepted. This was primarily done for convenience, as there’s every chance your fob gets pressed when you’re not in range of the car, burning a code. The problem with this approach is that should someone capture a code the car never received, they can use it to unlock your car at a later date. Indeed there have been many proof-of-concept systems developed to do this, however the latest one, a $30 gadget called RollJam, takes the process to a whole new level.

The device consists of a receiver, a transmitter and a signal jammer. When the device is activated it jams any wireless keyless entry signal, stopping it from reaching the car. Then, when a user presses their key fob to unlock the doors, it captures the code that was sent. The doors don’t unlock, so nearly all users will simply press the button again, sending another code. RollJam then transmits the first code to the car, unlocking the doors, whilst capturing the second code. The user can get into their car, and RollJam is left holding an unused code it can use to gain access at a later date. The device appears to work on most major brands of vehicle, with only a few of the more recent models being immune to the attack.

What amazes me is that such an attack could’ve easily been prevented by including an incremental counter in the key fob. When transmitting a code the fob would also send the current count, meaning any code carrying a previous number is void. The attack can also be defeated by making codes expire after a short time which, I admit, is a little more difficult to implement but surely not beyond the capability of companies with billions of dollars in annual revenue. To their credit some companies have made headway in preventing such an attack, however that won’t mean a lot for all the cars already on the road with susceptible systems.
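
The counter scheme is simple enough to sketch. The toy model below isn’t any manufacturer’s actual protocol (real rolling-code systems such as KeeLoq encrypt the counter rather than sending it alongside a MAC, and the shared secret here is purely illustrative); it just shows why tracking the highest accepted counter voids replayed or held-back codes:

```python
# Toy rolling-code model with the counter fix described above.
import hmac, hashlib

SECRET = b"shared-fob-and-car-key"      # provisioned at manufacture (illustrative)

def fob_press(counter: int):
    """The fob transmits its current counter plus a MAC over it."""
    code = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
    return counter, code

class Car:
    def __init__(self):
        self.last_counter = 0           # highest counter value ever accepted

    def try_unlock(self, counter: int, code: str) -> bool:
        expected = hmac.new(SECRET, str(counter).encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(code, expected):
            return False                # forged or corrupted transmission
        if counter <= self.last_counter:
            return False                # replayed or stale code: reject it
        self.last_counter = counter     # accepting a code voids everything older
        return True

car = Car()

captured = fob_press(1)                 # owner presses the fob; attacker records it
print(car.try_unlock(*captured))        # True  - the legitimate unlock
print(car.try_unlock(*captured))        # False - replaying the same code is void

held_back = fob_press(2)                # a jammed press the attacker kept for later
print(car.try_unlock(*fob_press(3)))    # True  - the owner's next press still works...
print(car.try_unlock(*held_back))       # False - ...and kills the held-back code
```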

In the end it comes down to a combination of convenience and bottom-dollar programming that led to such a pervasive system being as broken as it is. Unfortunately, unlike IT systems which can be patched against such vulnerabilities, these keyless entry systems will likely remain vulnerable for as long as they’re in use. Hopefully car manufacturers take note of this issue and work to address it in future models as, honestly, it seems like one of the most rookie mistakes ever.

 

Lexus’ Hoverboard is Deceptive Wankery.

There are some technological ideas that captivate the public consciousness, our want for them to exist outstripping any notion of practicality or usability. Chief among them is the flying car, the seemingly amazing idea which, should it ever become mainstream, would pose far more problems than it could ever solve. Still, numerous companies have worked towards making that idea a reality, with nearly all of them meeting the same fate. A close second (or third, if you’re more of a jetpack fan) is the hoverboard, a device that replicates the functionality of a skateboard without the wheels. Our collective desire for something like that is what results in videos like the following and, honestly, they give me the shits:

Anyone who’s followed technology like this knows that a true hoverboard, one that can glide over any surface, simply isn’t possible with our current understanding of physics and level of technological advancement. However if you grab a couple of powerful electromagnets and put them over a metallic surface you can make yourself a decent simulacrum of what a hoverboard might be; it just can’t leave that surface. Indeed there have been a few prototypes of this kind in the past and, whilst they’re cool and everything, they’re not much more than a demonstration of what a magnet can do.

This is where Lexus comes in with their utterly deceptive bullshit.

Just over a month ago Lexus put out this site showing a sleek-looking board billowing vapour out of its sides, serenely hovering a few inches above the ground. The media went ballistic, seemingly forgetting what would be required to make something of this nature and the several implementations that came before it. Worse still, the demonstration videos appeared to show the hoverboard working on regular surfaces, just like the ones in the movies that captured everyone’s imaginations. Like all good publicity stunts, however, the reality is far from what the pictures might tell and I lay the blame squarely on Lexus for being coy about the details.

You see the Lexus hoverboard is no different to the others that came before it: it still relies on magnets and requires a special surface in order to work. Lexus built that entire set just to demonstrate the hoverboard and was mum about the details because they knew no one would care if they knew the truth. Instead they kept everything secret, making many people believe they had created something new when in reality they hadn’t; all they did was put a larger marketing budget behind it.

Maybe I’ve just become an old cynic who hates fun but, honestly, I really got the shits with Lexus and the wider public’s reaction to this malarkey. Sure it looks cool, what with the slick design and mist cascading over the sides, but that’s about where it ends. Everything past that is Lexus engaging in deceptive marketing tactics to make us think it’s more than it is rather than being straight up about what they did. Of course they likely don’t care what a ranty blogger in a dark corner of the Internet thinks, especially since he’s mentioned their brand name 10 times in one post, but I felt the need to say my piece, even if it won’t change anything.


Nokia Resurfaces as a…Virtual Reality Video Startup?

Nokia was once the king of phones, the brand everyone wanted. For many it was because they made a solid handset that did what it needed to do: make calls and send text messages. Their demise came from their inability to adapt to the rapid pace of innovation spurred on by Apple and Google, their offerings in the smartphone space coming too late, their customers leaving for greener pastures. The result was that their handset manufacturing business was offloaded to Microsoft, but a small part of Nokia remained independent, one that held all the patents and their research and development arm. It seems that part of Nokia is looking to take the company in crazy new directions, with their first product being the Ozo, a 360-degree virtual reality video camera.

[Image: Nokia Ozo press photo]

Whilst Nokia isn’t flooding the newswaves with details just yet, we do know that the Ozo is a small spherical device incorporating 8 cameras and microphones able to capture video and sound from any angle. It’s certainly not the first camera of its kind, with numerous competitors already having products available in this space, but it is one of the better-looking offerings out there. As for how it’d fare against the competition, that’s something we’ll have to wait to see, as the first peek at Ozo footage is slated to come out just over a week from now.

At the same time Nokia has taken to the Tongal platform, a website that allows brands like Nokia to coax filmmakers into doing stuff for them, to garner proposals for videos that will demonstrate the “awesomeness” of the Ozo platform. To entice people to participate there’s a total of $42,000 and free Ozo cameras up for grabs for two lucky filmmakers, something which is sure to attract a few to the platform. Whether that’s enough to make them the platform of choice for VR filmmakers though is another question, one I’m not entirely sure that Nokia will like the answer to.

You see whilst VR video has taken off of late, thanks to YouTube’s support of the technology, it’s really just a curiosity at this point. The current technology effectively precludes it from making its way into cinemas, given that you’d need to strap an Oculus Rift or equivalent to your head to experience it. Thus it’s currently limited in appeal to tech demos, 3D renderings and a smattering of indie projects. That makes the market for such a device pretty small, especially when you consider there are already a few players selling products in this space. So whilst Nokia’s latest device may be a refreshing change for the once king of phones, I’m not sure it’ll become much more than a hobby for the company.

Maybe that’s all Nokia is looking for here, throwing a wild idea out to the public to see what they’d make of it. Nokia wasn’t exactly known for its innovation once the smartphone revolution began but perhaps they’re looking to change that perception with the Ozo. I’m not entirely convinced it will work out for them, anyone can throw together a slick website with great press shots, but the reaction from the wider press seems to indicate that they’re excited about the potential this might bring.


Intel and Micron Announce 3D XPoint Memory.

The never-ending quest to satisfy Moore’s Law means that we’re always looking for ways to make computers faster and cheaper. Primarily this focuses on the brain of the computer, the Central Processing Unit (CPU), which in most modern computers is now home to transistors numbering in the billions. All the other components haven’t been resting on their laurels either, as shown by the radical improvement in speeds from things like Solid State Drives (SSDs), high-speed interconnects and graphics cards that are just as jam-packed with transistors as any CPU. One aspect that’s been relatively stagnant however has been RAM which, whilst increasing in speed and density, has only seen iterative improvements since the introduction of the first Double Data Rate (DDR) standard. Today Intel and Micron have announced 3D XPoint, a new technology that sits somewhere between DRAM and NAND in terms of speed.

[Image: 3D XPoint die]

Details on the underlying technology are a little scant at the moment, however what we do know is that instead of storing information by trapping electrons, as current memory does, 3D XPoint (pronounced “cross point”) stores bits via a change in resistance of the memory material. If you’re like me you’d probably assume this is some kind of phase change memory, however Intel has stated that it’s not. What they have told us is that the technology uses a lattice structure which doesn’t require transistors to read and write cells, allowing them to dramatically increase the density, up to 128Gbit per die. It’s also much faster than the NAND that powers current SSDs, although slightly slower than current DRAM, with the added advantage of being non-volatile.

Unlike most new memory technologies, which often purport to be replacements for one type of memory or another, Intel and Micron are positioning 3D XPoint as an addition to the current architecture. Essentially your computer has several types of memory, each used for a specific purpose. There’s memory directly on the CPU which is incredibly fast but very expensive, so there’s only a small amount of it. The second type is RAM, which is still fast but can be had in greater amounts. The last is your long-term storage, either in the form of spinning-rust hard drives or an SSD. 3D XPoint would sit between the last two, providing a kind of high-speed cache that holds often-used data which is then persisted to disk. Funnily enough the idea isn’t that novel, things like the Xbox One use a similar tiered approach, so there’s every chance it might end up happening.
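
The tiering idea is easy to illustrate in code. The sketch below is a toy model only, not Intel’s architecture: a small, fast tier (standing in for a 3D XPoint layer) absorbs reads of hot data in front of a large, slow backing store, and the capacities and keys involved are made-up numbers for illustration.

```python
# Toy illustration of the storage-tiering idea: a small fast tier in front of
# a big slow backing store. Capacities here are tiny, purely for demonstration.
from collections import OrderedDict

class TieredStore:
    def __init__(self, fast_capacity=4):
        self.fast = OrderedDict()          # small, fast cache tier (LRU ordering)
        self.slow = {}                     # large, slow backing store
        self.fast_capacity = fast_capacity
        self.slow_reads = 0

    def write(self, key, value):
        self.slow[key] = value             # data is always persisted to the slow tier

    def read(self, key):
        if key in self.fast:               # hot data is served from the fast tier
            self.fast.move_to_end(key)
            return self.fast[key]
        self.slow_reads += 1               # cache miss: pay the slow-tier penalty
        value = self.slow[key]
        self.fast[key] = value             # promote the block into the fast tier
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict the least recently used entry
        return value

store = TieredStore()
for i in range(8):
    store.write(f"block{i}", i)

for _ in range(100):                       # a hot working set of 3 blocks
    for i in range(3):
        store.read(f"block{i}")

print("slow-tier reads:", store.slow_reads)   # only the initial misses hit the slow tier
```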

The reason this is exciting is that Intel and Micron are already going into production with these new chips, opening up the possibility of a commercial product hitting shelves in the very near future. Whilst integrating it in the way described in the press release would take much longer, due to the change in architecture, there’s a lot of potential for a new breed of SSDs based on this technology. They might be an order of magnitude more expensive than current SSDs, however there are applications where you can’t have too much speed, and for those 3D XPoint could be a welcome addition to the storage stack.

Considering the numerous technological announcements we’ve seen from other large vendors that haven’t amounted to much, it’s refreshing to see something that could be hitting the market in short order. Whilst Intel and Micron are still staying mum on the details, I’m sure the next few months will see more information make its way to us, hopefully closely followed by demonstrator products. I’m very interested to see what kind of tech is powering the underlying cells, as a non-phase-change, resistance-based memory would be truly novel and, once production hits at-scale levels, could fuel another revolution akin to the one we saw with SSDs all those years ago. Needless to say I’m definitely excited to see where this is heading and I hope Intel and Micron keep us in the loop on new developments.


Broadband Has to Remain Expensive Because…NBN? Come on…

In terms of broadband Australia doesn’t fare too well, ranking somewhere around 58th in terms of speed whilst our plans are among the most expensive, both in real dollar terms and in dollars per advertised megabit. The original FTTP NBN would’ve elevated us out of the Internet doldrums, however the switch to the MTM solution has severely dampened any hopes we had of achieving that goal. However if you were to ask our current communications minister, the esteemed Malcolm Turnbull, what he thought about the current situation he’d refer you to a report that states we need to keep broadband costs high in order for the NBN to be feasible. Just like most things he and his department have said about the NBN this is completely incorrect and is nothing more than pandering to the incumbent telcos.


The argument in the submission centres on the idea that if current broadband prices are too cheap then customers won’t be compelled to switch over to the new, obviously vastly more expensive, NBN. The submission notes that even a 10% reduction in current broadband prices would cause this to happen, something which could occur if Telstra were forced to drop its wholesale prices. A quick look at the history of the NBN and broadband prices in Australia doesn’t support the narrative they’re putting forward, however, mostly because the price gap they claim would be so damaging already exists in Australia.

You see, if you take current NBN plan pricing into consideration the discrepancies are already there, even when you go for the same download speeds. A quick look at iiNet’s pricing shows that your bog-standard ADSL2+ connection with a decent amount of downloads will cost you about $50/month, whereas the equivalent NBN plan runs about $75/month. Decreasing the ADSL2+ plan by 10%, a whopping $5, isn’t going to change much when there’s already a $25/month price differential between the two. Indeed, if people only chose the cheaper option then we should’ve seen that reflected in the adoption rates of the original NBN, correct?
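
To put the submission’s feared 10% price drop in perspective, here’s the arithmetic using the iiNet figures quoted above (plan prices obviously vary; these are just the numbers from the paragraph):

```python
# The arithmetic behind the paragraph above, using the quoted iiNet prices.
adsl_price = 50.0    # $/month, bog-standard ADSL2+ with a decent download quota
nbn_price = 75.0     # $/month, the equivalent-speed NBN plan

gap_now = nbn_price - adsl_price
gap_after_cut = nbn_price - adsl_price * 0.9   # the "dangerous" 10% ADSL price cut

print(f"Current gap: ${gap_now:.0f}/month")                     # $25/month
print(f"Gap after a 10% ADSL cut: ${gap_after_cut:.0f}/month")  # $30/month, hardly a game changer
```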

However, as the adoption rates have shown, Australians are ready, willing and able to pay a premium for better Internet services and have been doing so for years with the original FTTP NBN. The fact of the matter is that whilst ADSL2+ may advertise NBN-level speeds it almost always delivers far less, with most customers getting only a fraction of the speeds they’re promised. The FTTP NBN on the other hand delivers exactly the speeds it advertises and thus its value proposition is much greater than its ADSL2+ equivalent. The MTM NBN unfortunately won’t have this capability, due to its mixed use of FTTN technologies which simply can’t make the same promises about speed.

It’s things like this that do nothing to endear the Liberal party to the technical vote, as it’s so easy to see through the thin veil of political posturing and rhetoric. The facts on this matter are clear: Australians want better broadband and they’re willing to pay for it. Cheaper options aren’t going to change that; instead they will give those who are currently locked out of the broadband market a chance to get into it. Then those of us who need faster Internet connections will happily pay the premium, knowing full well that we’ll get the speeds that are advertised rather than a fraction of them. The sooner the Liberal party wakes up and realises this the better, but I’m not holding out much hope that they will.

7484.Restart-warning_01455B5B

Windows 10 to Have Mandatory Updates for Home Users.

Left to their own devices many home PC users will defer installing updates for as long as humanly possible, some even turning off the auto-update system completely to get rid of those annoying pop-ups. Of course this means that patches, which routinely arrive within days of an exploit being discovered, often never get installed. This leaves many users unnecessarily vulnerable to security breaches, something which could be avoided if they just installed updates once in a while. With Windows 10 it now seems that most users won’t have a choice: they’ll be getting all of Microsoft’s updates whether they want them or not.

[Image: Windows restart warning prompt]

Currently you have a handful of options for how Windows Update behaves. The default is to let Windows decide when to download, install and reboot your computer as necessary. The second does all the same except it lets you choose when to reboot, useful if you don’t leave your computer on constantly or don’t like it rebooting at random. The third is essentially just a notification setting that tells you when updates are available, leaving it up to you to choose which ones to download and install. The last is, of course, to disable the service completely, something not many IT professionals would recommend you do.
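
For the curious, those behaviours map onto the long-standing Automatic Updates group-policy values. The sketch below simply reads that legacy policy key if it’s present (Windows-only); whether Windows 10 Home continues to honour anything beyond the first two behaviours is precisely what the change discussed here takes away, so treat this as a way of inspecting the old options rather than controlling the new ones.

```python
# Read the legacy Automatic Updates policy, which maps onto the behaviours above.
# Windows-only: uses the standard-library winreg module.
import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"
AU_OPTIONS = {
    2: "Notify before download (user picks what to install)",
    3: "Auto download, notify before install",
    4: "Auto download and schedule the install/reboot",
    5: "Let the local administrator choose the setting",
}

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AU_KEY) as key:
        option, _ = winreg.QueryValueEx(key, "AUOptions")
        print(AU_OPTIONS.get(option, f"Unrecognised AUOptions value: {option}"))
except FileNotFoundError:
    print("No Automatic Updates policy set; Windows is using its default behaviour")
```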

Windows 10 narrows this down to just the first two options for Home users, removing the ability to opt out of updates they don’t want. Nor is this limited to a specific class of updates (like, say, security fixes): feature updates, and even things like drivers, could potentially find their way into this mandatory system. Users of the Pro version of Windows 10 will have the option to defer feature updates for up to 8 months (called Current Branch for Business), however past that point they’ll be cut off from security updates, something I’m sure none of them want. The only version of Windows 10 with long-term deferral of feature updates will be the Enterprise version, which can elect to receive only security updates between major Windows releases.

Predictably this has drawn the ire of many IT professionals and consumers alike, mostly due to the inclusion of feature updates in the mandatory update scheme. Few would argue that mandatory security updates are a bad thing, and indeed upon first hearing about this that’s what I assumed it would be, however lumping Windows feature updates in alongside them makes for a much less palatable affair. Keen observers have pointed out that this is likely down to Microsoft attempting to mould Windows into an as-a-service offering alongside products like Office 365. For products like that, continuous (and mandatory) updates aren’t much of a problem since they’re vetted against a single platform; for home users it’s a little more problematic, given the numerous variables at play.

Given that Windows 10 is slated to go out to the general public in just over a week it’s unlikely that Microsoft will be drastically changing this position anytime soon. For some this might be another reason for them to avoid upgrading to the next version of Windows although I’m sure the lure of a free version will be hard to ignore. For businesses though it’s somewhat less of an issue as they still have the freedom to update how they please. Microsoft has shown however that they’re intent on listening to their consumer base and should there be enough outrage about this then there’s every chance that they’ll change their position. This won’t be stopping me from upgrading, of course, but I’m one of those people who has access to any version I may want.

Not everyone is in as fortunate a position as I am.


Polymer Photovoltaic Cells See Efficiency Boost with Better Designs.

The solar cells you see on many roofs today are built out of silicon, the same stuff that powers your computer and smartphone. The reasons for this are many but it mostly comes down to silicon’s durability, its semiconductor properties and the ease with which we can mass produce it, thanks to our investments in semiconductor manufacturing. However they’re not the only type of solar cell we can create; indeed there’s a different type based on polymers (essentially plastic) that has the potential to be much cheaper to manufacture. The technology is still very much in its infancy though, with peak efficiency (the rate at which a cell converts sunlight into electricity) sitting around 10%, well below what even commercial-grade silicon panels deliver. New research however could change that dramatically.

[Image: polymer and fullerene structure]

The current standard for organic polymer based solar cells utilises two primary materials. The first is, predictably, an organic polymer that can absorb photons and turn them into electrons. These polymers are then doped with a special structure of carbon called fullerene, more commonly known as buckyballs (named after Buckminster Fuller, whose geodesic domes their soccer-ball-like structure resembles). However the structures that form with current manufacturing processes are somewhat random. This often means that when a photon produces a free electron it recombines before it can be used to generate electricity, which is what leads to polymer cells’ woeful efficiency. New research however points to a way to give order to this chaos and, in the process, greatly improve the efficiency.

Researchers at the US Department of Energy’s SLAC National Accelerator Laboratory have developed a method to precisely control the layout of the polymers and fullerene, rather than settling for the jumbled mess that is currently standard. They then used this method to test various arrangements to see which produced the highest efficiency. Interestingly the best arrangement was one that mimicked the structure we see in plants when they photosynthesise. This meant the charge created in the polymer by a photon didn’t recombine instantly as it usually would; indeed the polymers were able to hold a charge for weeks, providing a major step up in efficiency.

Whilst this research will go a long way towards solving one of the major problems with polymer-based solar cells, there are still other issues to address before they become commercially viable. A typical silicon solar cell will last 20 years or more, whereas a polymer one lasts only a fraction of that time, usually around four years with current technology. For most solar cells that’s roughly how long it takes to pay back the initial investment (both in terms of energy and money), so until polymer cells get past this roadblock they will remain an inferior product.
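
To see why that lifetime matters so much, here’s a rough worked example of the payback problem. Every price, output and lifetime figure below is an illustrative assumption, not a measured value for any particular panel:

```python
# Rough worked example of panel payback versus panel lifetime.
# All figures are illustrative assumptions, not measured data.
def years_to_payback(cost_dollars, annual_output_kwh, electricity_price=0.25):
    """Years until the electricity generated covers the purchase cost."""
    return cost_dollars / (annual_output_kwh * electricity_price)

silicon = {"cost": 300.0, "annual_kwh": 400.0, "lifetime_years": 20}
polymer = {"cost": 150.0, "annual_kwh": 200.0, "lifetime_years": 4}   # ~10% efficient but cheaper to make

for name, panel in (("silicon", silicon), ("polymer", polymer)):
    payback = years_to_payback(panel["cost"], panel["annual_kwh"])
    surplus = panel["lifetime_years"] - payback
    print(f"{name}: pays back in {payback:.1f} years, "
          f"{surplus:.1f} productive years left after that")
```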

Still, research like this shows there’s potential for other technologies to compete in the same space as silicon, even if there are drawbacks yet to be overcome. Hopefully this work will provide further insights into increasing the longevity of these panels at the same time as improving their efficiency. Then polymer panels could become the low-cost, mass-produced option, enabling a new wave of investment from consumers who were previously locked out by current photovoltaic pricing.