The computer (or whatever Internet-capable device you happen to be viewing this on) is made up of various electronic components. For the most part these are semiconductors, devices that conduct electricity but don’t do so readily, but there’s also a lot of supporting electronics built from what we call the fundamental components. As almost any electrical enthusiast will tell you there are 3 such components: the resistor, the capacitor and the inductor, each with its own set of properties that makes it useful in electronic circuits. There’s been speculation about a 4th fundamental component for about 40 years, but before I talk about that I’ll need to give you a quick rundown of the current fundamentals’ properties.
The resistor is the simplest of the lot: all it does is impede the flow of electricity. They’re quite simple devices, usually a small brown package banded by 4 or more colours which denote just how resistive it actually is. Resistors are often used as current limiters, since by Ohm’s law the current that passes through one is the voltage across it divided by its resistance. In essence you can think of them as narrow pathways which electric current has to squeeze through.
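To make the current-limiting idea concrete, here’s a minimal sketch of Ohm’s law in action. The LED scenario and all the component values are my own illustrative assumptions, not anything from a real design:

```python
def current_through(voltage, resistance):
    """Current in amps through a resistor (Ohm's law: I = V / R)."""
    return voltage / resistance

# Hypothetical example: a 5 V supply feeding an LED that drops ~2 V,
# so 3 V falls across the current-limiting resistor.
supply_v, led_drop_v = 5.0, 2.0
resistor_ohms = 150.0
current_a = current_through(supply_v - led_drop_v, resistor_ohms)
print(f"{current_a * 1000:.0f} mA")  # prints "20 mA"
```

Pick a bigger resistor and less current squeezes through, which is exactly the “narrow pathway” intuition.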
Capacitors are intriguing little devices and can be best thought of as small batteries. You’ve seen them if you’ve taken apart any modern device, as they’re those little canister-looking things attached to the main board. They work by storing charge in an electrostatic field between two metal plates that are separated by an insulating material called a dielectric. Modern day capacitors are essentially two metal plates and the dielectric rolled up into a cylinder, something you could see if you cut one open. I’d only recommend doing this with a “solid” capacitor, as the dielectrics used in other capacitors are liquids and tend to be rather toxic and/or corrosive.
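The “battery” analogy can be put in numbers with the two standard capacitor formulas, Q = C·V for stored charge and E = ½·C·V² for stored energy. The 470 µF / 12 V figures below are just a plausible electrolytic-capacitor example I’ve picked for illustration:

```python
def stored_charge(capacitance_f, voltage_v):
    """Charge in coulombs held on the plates: Q = C * V."""
    return capacitance_f * voltage_v

def stored_energy(capacitance_f, voltage_v):
    """Energy in joules stored in the electrostatic field: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# Example: a 470 microfarad capacitor charged to 12 V.
c, v = 470e-6, 12.0
print(stored_charge(c, v))  # prints 0.00564 (coulombs)
print(stored_energy(c, v))  # prints 0.03384 (joules)
```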
Inductors are very similar to capacitors in that they also store energy, but instead of an electrostatic field they store it in a magnetic field. Again you’ve probably seen them if you’ve cracked open any modern device (or, say, looked inside your computer) as they look like little rings of metal with wire coiled around them. They’re often referred to as “chokes” as they tend to oppose changes in the current that induces the magnetic field within them, and at high frequencies they’ll appear as a break in the circuit, useful if you’re trying to keep alternating current out of your circuit.
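That “break in the circuit at high frequencies” behaviour falls straight out of the formula for inductive reactance, X_L = 2πfL. A quick sketch with an assumed 100 µH choke shows why the same part is invisible to mains-frequency current but a wall to radio-frequency noise:

```python
import math

def inductive_reactance(frequency_hz, inductance_h):
    """Opposition to alternating current, in ohms: X_L = 2 * pi * f * L."""
    return 2 * math.pi * frequency_hz * inductance_h

# A hypothetical 100 microhenry choke:
L = 100e-6
print(inductive_reactance(50, L))    # ~0.03 ohms at 50 Hz: barely there
print(inductive_reactance(10e6, L))  # ~6283 ohms at 10 MHz: effectively a break
```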
For quite a long time these 3 components formed the basis of all electrical theory, and nearly any component could be expressed in terms of them. However back in 1971 Leon Chua explored the symmetry between these fundamental components and inferred that there should be a 4th fundamental component: the memristor. The name is a combination of memory and resistor, and Chua stated that this component would not only remember its resistance but also have it changed by passing current through it. Passing current in one direction would increase the resistance and reversing it would decrease it. The implications of such a component would be huge, but it wasn’t until 37 years later that the first memristor was created by researchers at HP Labs.
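The remember-and-change behaviour is easier to grasp with a toy model. In the sketch below the device’s resistance drifts in proportion to the total charge passed through it; this linear-drift model and its numbers are a simplification of my own, not how HP’s actual titanium dioxide device works:

```python
class ToyMemristor:
    """Toy memristor: resistance depends on the net charge passed through it."""

    def __init__(self, base_ohms=100.0, drift_ohms_per_coulomb=1e4):
        self.resistance = base_ohms
        self.drift = drift_ohms_per_coulomb

    def pass_current(self, amps, seconds):
        """Current in one direction raises resistance; reversed current lowers it."""
        self.resistance += self.drift * amps * seconds  # charge = amps * seconds
        return self.resistance

m = ToyMemristor()
m.pass_current(0.001, 1.0)   # 1 mA for 1 s: resistance climbs to 110 ohms
m.pass_current(-0.001, 1.0)  # reverse it: back down to 100 ohms
# Crucially, with no current flowing the resistance just stays put,
# which is what makes the device a natural non-volatile memory cell.
```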
What’s really exciting about the memristor is its potential to replace other solid state storage technologies like Flash and DRAM. Due to the memristor’s simplicity they are innately fast and, best of all, they can be integrated directly onto processor chips. If you look at the breakdown of a current generation processor you’ll notice that a good portion of the silicon is dedicated to cache, or onboard memory. Memristors have the potential to boost the amount of onboard memory to extraordinary levels, and HP believes they’ll be doing that in just 18 months:
Williams compared HP’s resistive RAM technology against flash and claimed to meet or exceed the performance of flash memory in all categories. Read times are less than 10 nanoseconds and write/erase times are about 0.1-ns. HP is still accumulating endurance cycle data at 10^12 cycles and the retention times are measured in years, he said.
This creates the prospect of adding dense non-volatile memory as an extra layer on top of logic circuitry. “We could offer 2-Gbytes of memory per core on the processor chip. Putting non-volatile memory on top of the logic chip will buy us twenty years of Moore’s Law,” said Williams.
To put this in perspective, Intel’s current flagship CPU ships with a total of 8MB of cache on the CPU, and that’s shared between 4 cores. A similar memristor based CPU would have a whopping 8GB of onboard cache, effectively negating the need for external DRAM. Couple this with a memristor based external drive for storage and you’d have a computer that’s decades ahead of the curve in terms of what we thought was possible, and Moore’s Law can rest easy for a while.
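The back-of-envelope arithmetic behind that comparison, using the 8MB/4-core figure above and Williams’ claimed 2GB per core:

```python
# Today's figures (from the comparison above):
cores = 4
flash_era_cache_mb = 8          # total cache, shared across all cores
memristor_per_core_gb = 2       # Williams' claimed figure

today_per_core_mb = flash_era_cache_mb / cores            # 2 MB per core today
memristor_total_gb = memristor_per_core_gb * cores        # 8 GB total on chip
ratio = (memristor_total_gb * 1024) / flash_era_cache_mb  # 1024x more on-chip memory
print(today_per_core_mb, memristor_total_gb, ratio)       # prints 2.0 8 1024.0
```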
This kind of technology isn’t your usual pie in the sky “it’ll be available in the next 10 years” malarkey; this is the real deal. HP isn’t the only one looking into this either: Samsung (one of the world’s largest flash manufacturers) has also been aggressively pursuing this technology and will likely debut products around the same time. For someone like me it’s immensely exciting as it shows that there are still many great technological advances ahead of us, just waiting to be uncovered and put into practice. I can’t wait to see how the first memristor devices perform as it will truly be a generational leap ahead in technology.
3D is one of those technologies that I’m both endlessly infatuated with and frustrated by. Just over a year ago I saw Avatar in 3D and for me it was the first movie ever to use the technology in a way that wasn’t gimmicky but served as a tool for creative expression. Cameron’s work on getting the technology to that point was something to be commended, but what unfortunately followed was a long stream of movies jumping on the 3D bandwagon, hoping it would be their ticket to Avatar-like success. Since then I’ve only bothered to see one other movie in 3D (Tron: Legacy), as no other movie demonstrated its use of 3D as anything other than following the fad, utterly failing to understand the art that is 3D.
Last year was the debut of consumer level 3D devices with the initial forays being the usual TVs and 3D enabled media players. Soon afterwards we began to see the introduction of some 3D capable cameras allowing the home user to create their very own 3D movies. Industry support for the format was way ahead of the curve with media sharing sites like YouTube allowing users to view 3D clips and video editing software supporting the format long before it hit the consumer level. We even had Nintendo announce that their next generation portable would be called the 3DS and boast a glasses free 3D screen at the top. Truly 3D had hit the mainstream as anyone and everyone jumped to get in on the latest technology craze.
Indeed the 3D trend has become so pervasive that even today as I strolled through some of my RSS reader backlog I came across not one, but two articles relating to upcoming 3D products. The first is set to be the world’s first 3D smartphone, the LG Optimus 3D. It boasts both a 3D capable camera and glasses free 3D screen along with the usual smartphone specs we’ve come to expect from high end Android devices. The second was that NVIDIA’s current roadmap shows that they’re planning to develop part of their Tegra line (for tablets) with built in 3D technology. Looking over all these products I can’t help but feel that there’s really little point to having 3D on consumer devices, especially portable ones like smartphones.
3D in cinemas makes quite a lot of sense: it’s another tool in the director’s kit for expressing themselves when crafting their movie experience. On a handset or tablet you’re not really there to be immersed in something; you’re usually consuming small bits of information for short periods. Adding 3D doesn’t enhance that experience at all, in fact I’d dare say it detracts from it, thanks to the depth of field placing objects in a virtual space that in reality is behind the hand holding the device. There is the possibility that 3D will enable a new kind of user interface that’s far more intuitive to the regular user than what’s currently available, but I fail to see how adding depth of field to a handheld device will accomplish that.
I could just be romanticising 3D technology as something best left to the creative types, but if the current fad is anything to go by 3D is unfortunately more often misused as a cheap play to bilk consumers for a “better” experience. Sure, some of the technology improvements of the recent past can trace their roots back to 3D (hello, cheap 120Hz LCD screens), but for the most part 3D is just used as an excuse to charge more for the same experience. I’ve yet to see any convincing figures on how 3D products are doing out in the market, but anecdotally it’s failed to gain traction amongst those I know. Who knows, maybe the LG Optimus 3D will turn out to be something really groovy, but as far as I can tell it’s simply yet another gimmick phone attempting to cash in on the industry’s current obsession with 3D, just like every other 3D consumer product out there.