There are few things I find more enjoyable than putting together a new PC. It starts with the chase, where I determine my budget and hunt down the various components that will make up the final system. Then comes the verification, where I trawl through dozens upon dozens of reviews to ensure I’ve selected only the best products for their price bracket. Finally the time comes to purchase all the components, hopefully from a single vendor with price matching, and once they arrive I’ll begin the immensely enjoyable task of assembling my (or someone else’s) new PC. Nothing quite beats the feeling of seeing Windows boot up for the first time on hardware you’ve just finished building.
Of course I realise that the vast majority of the world doesn’t enjoy engaging in such activities, especially if all you’re doing with your PC is watching movies or the occasional bit of word processing, and this is typically when I’ll send them to any one of a number of PC manufacturers who can give them a solid device with a long warranty. My gamer buddies will typically get me to validate their builds and, if they don’t feel up to the task, build it for them; otherwise they simply stick to consoles, which provide a pretty good experience for much of their useful life. This is why I think Razer’s Project Christine is trying to target a market that just doesn’t exist, as it sits between well defined market segments that are already well serviced.
Project Christine is, as a concept, a pretty interesting idea. All the core components that make up a PC (RAM, storage, graphics card, etc.) have been modularized allowing almost anyone to build up a custom PC of their liking without the requisite PC building experience. The design is somewhat reminiscent of the Thermaltake Level 10 which used the compartmentalization of different parts to improve the cooling as well as to make maintenance easier. Razer’s concept takes this idea to the extreme, effectively commoditizing some of the skills required to build a high end gaming PC whilst still retaining the same issues around configuration, like knowing which components are the best bang for buck at the time.
Razer could potentially head off that second issue by going ahead with their subscription based model for upgraded parts. The idea would be that after you’ve bought whatever model you wanted (this service appears to be targeted at the high end) you pay a monthly subscription fee to get the latest and greatest parts delivered to you. For the ultimate in hardcore gamers this could be somewhat attractive, however it’d likely be an extremely expensive service to opt in to as the latest PC components are rarely among the cheapest or best value. Still, if you’ve got a lot of money and not a whole lot of time it could be of use to you, except that investing so much in a gaming rig typically means you have enough time to make use of it.
This is where I feel Project Christine falls down as the target market is a demographic of people who are interested in configuring their computer right up to the point of physically building it. Whilst I don’t really have any facts to back up this next assertion it has been my experience that people of this nature are either already well serviced by custom build services (which most PC shops provide) or know someone with the capabilities to do it. Sure the modular nature of the Christine is pretty awesome, and it certainly makes a striking impression, however that also means you need to wait for Razer to Christine-ify parts before they’ll be available to you. You might be able to crack them open and do the upgrade yourself but then you’re really only one step away from doing a full PC build anyway.
With consoles and PCs lasting longer and longer as time goes by concepts like Project Christine seem to be rooted in the past idea that a gaming PC needed constant upgrades to remain viable. That simply hasn’t been the case for the better part of a decade and whilst the next generation of consoles might spur an initial burst in PC upgrades it’s doubtful that the constant upgrade cycle will ever return. Project Christine might find itself with a dedicated niche of users but I really don’t believe it will be large enough to be sustainable, even with the Razer name behind it.
The computer (or whatever Internet capable device you happen to be viewing this on) is made up of various electronic components. For the most part these are semiconductors, devices that conduct electricity but don’t do so readily, but there’s also a lot of supporting electronics built from what we call the fundamental components. As almost any electrical enthusiast will tell you there are 3 such components: the resistor, capacitor and inductor, each with its own set of properties that makes it useful in electronic circuits. There’s been speculation about a 4th fundamental component for about 40 years, but before I talk about that I’ll need to give you a quick rundown of the current fundamentals’ properties.
The resistor is the simplest of the lot: all it does is impede the flow of electricity. They’re quite simple devices, usually a small brown package banded by 4 or more colours that denote just how resistive it actually is. Resistors are often used as current limiters, as the amount of current that can pass through one is directly related to the voltage across it and its level of resistance. In essence you can think of them as narrow pathways that electric current has to squeeze through.
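That relationship between voltage, resistance and current is just Ohm’s law, and a quick sketch makes the current limiting behaviour concrete (the component values here are arbitrary, purely for illustration):

```python
def current_through_resistor(voltage_v, resistance_ohm):
    """Ohm's law: I = V / R."""
    return voltage_v / resistance_ohm

# A 5 V supply across a 1 kOhm resistor limits the current to 5 mA;
# double the resistance and the current halves.
i = current_through_resistor(5.0, 1000.0)
print(f"{i * 1000:.1f} mA")
```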
Capacitors are intriguing little devices and can be best thought of as batteries. You’ve seen them if you’ve taken apart any modern device as they’re those little canister looking things attached to the main board. They work by storing charge in an electrostatic field between two metal plates that are separated by an insulating material called a dielectric. Modern day capacitors are essentially two metal plates and the dielectric rolled up into a cylinder, something you could see if you cut one open. I’d only recommend doing this with a “solid” capacitor as the dielectrics used in other capacitors are liquids and tend to be rather toxic and/or corrosive.
Inductors are very similar to capacitors in that they also store energy, but instead of an electrostatic field they store it in a magnetic field. Again you’ve probably seen them if you’ve cracked open any modern device (or, say, looked inside your computer) as they look like little circles of metal with wire coiled around them. They’re often referred to as “chokes” as they tend to oppose the current that induces the magnetic field within them, and at high frequencies they’ll appear as a break in the circuit, useful if you’re trying to keep alternating current out of your circuit.
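That opposing frequency behaviour falls out of two standard formulas: inductive reactance rises with frequency (X_L = 2πfL) while capacitive reactance falls (X_C = 1/(2πfC)). A quick sketch with made-up component values shows why an inductor “chokes” high frequencies:

```python
import math

def inductive_reactance(freq_hz, inductance_h):
    """X_L = 2*pi*f*L -- rises with frequency."""
    return 2 * math.pi * freq_hz * inductance_h

def capacitive_reactance(freq_hz, capacitance_f):
    """X_C = 1 / (2*pi*f*C) -- falls with frequency."""
    return 1.0 / (2 * math.pi * freq_hz * capacitance_f)

# A 10 mH inductor and a 10 uF capacitor at mains frequency vs 50 kHz:
for f in (50, 50_000):
    xl = inductive_reactance(f, 10e-3)
    xc = capacitive_reactance(f, 10e-6)
    print(f"{f:>6} Hz: X_L = {xl:8.1f} ohm, X_C = {xc:8.1f} ohm")
```

At 50 kHz the inductor presents over three kilohms of opposition while the capacitor has dropped below an ohm, which is exactly the “break in the circuit” behaviour described above.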
For quite a long time these 3 components formed the basis of all electrical theory and nearly any component could be expressed in terms of them. However back in 1971 Leon Chua explored the symmetry between these fundamental components and inferred that there should be a 4th fundamental component, the Memristor. The name is a combination of memory and resistor and Chua stated that this component would not only have the ability to remember its resistance, but also have it changed by passing current through it. Passing current in one direction would increase the resistance and reversing it would decrease it. The implications of such a component would be huge but it wasn’t until 37 years later that the first memristor was created by researchers in HP’s lab division.
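HP’s device is often described with a simple “linear drift” model: the memristance sits somewhere between a low value R_on and a high value R_off, and current pushes an internal state variable back and forth between them, with the state (and hence the resistance) persisting when the current stops. The toy simulation below sketches that idea; all the parameter values are made up for illustration and aren’t HP’s actual figures:

```python
def simulate_memristor(currents_a, dt=1e-6, r_on=100.0, r_off=16000.0,
                       mobility=1e-10, width=1e-8):
    """Toy linear-drift memristor: state w in [0, width] moves with current.

    Memristance M = r_on * (w/width) + r_off * (1 - w/width).
    Current in one direction grows w (lowering resistance); reversing
    the current shrinks it again. With zero current, w -- and therefore
    the resistance -- simply stays where it was left.
    """
    w = width / 2  # start halfway between r_on and r_off
    resistances = []
    for i in currents_a:
        x = w / width
        resistances.append(r_on * x + r_off * (1 - x))
        w += mobility * r_on / width * i * dt  # drift proportional to current
        w = min(max(w, 0.0), width)            # clamp to physical bounds
    return resistances

# Push current one way: the resistance falls and then sticks at r_on.
forward = simulate_memristor([1e-3] * 100)
print(forward[0], forward[-1])
```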
What’s really exciting about the memristor is its potential to replace other solid state storage technologies like Flash and DRAM. Due to their simplicity memristors are innately fast and, best of all, they can be integrated directly onto the chip of processors. If you look at the breakdown of a current generation processor you’ll notice that a good portion of the silicon used is dedicated to cache, or onboard memory. Memristors have the potential to boost the amount of onboard memory to extraordinary levels, and HP believes they’ll be doing that in just 18 months:
Williams compared HP’s resistive RAM technology against flash and claimed to meet or exceed the performance of flash memory in all categories. Read times are less than 10 nanoseconds and write/erase times are about 0.1-ns. HP is still accumulating endurance cycle data at 10^12 cycles and the retention times are measured in years, he said.
This creates the prospect of adding dense non-volatile memory as an extra layer on top of logic circuitry. “We could offer 2-Gbytes of memory per core on the processor chip. Putting non-volatile memory on top of the logic chip will buy us twenty years of Moore’s Law,” said Williams.
To put this in perspective Intel’s current flagship CPU ships with a total of 8MB of cache on the CPU and that’s shared between 4 cores. A similar memristor based CPU would have a whopping 8GB of on board cache, effectively negating the need for external DRAM. Couple this with a memristor based external drive for storage and you’d have a computer that’s literally decades ahead of the curve in terms of what we thought was possible, and Moore’s Law can rest easy for a while.
This kind of technology isn’t your usual pie in the sky “it’ll be available in the next 10 years” malarkey, this is the real deal. HP isn’t the only one looking into this either: Samsung (one of the world’s largest flash manufacturers) has also been aggressively pursuing this technology and will likely début products around the same time. For someone like me it’s immensely exciting as it shows that there are still many great technological advances ahead of us, just waiting to be uncovered and put into practice. I can’t wait to see how the first memristor devices perform as it will truly be a generational leap in technology.
My main PC at home is starting to get a little long in the tooth, having been ordered back in the middle of 2008 and only receiving a graphics card and a hard drive by way of upgrades since then. Like all PCs I’ve had, it suffered a myriad of problems that I usually just put up with until I stumbled across a workaround, but I think the vast majority of them can be traced to a faulty motherboard (it can’t take more than 4GB of RAM or it won’t POST) and a batch of faulty hard drives (which would randomly park their heads, causing it to freeze). At the time I had the wonderful idea of buying the absolute latest so I could upgrade cheaply for the next few years, but thanks to the consolization of games I found that wasn’t really necessary.
To be honest it’s not even really necessary now, with all the latest games still running at full resolution and most at high settings to boot. I am starting to lag on the technology front, however, with my graphics card not supporting DirectX 11 and everything but the RAM being 2 generations behind (yes, I have a Core 2 Duo). So I took it upon myself to build a rig that combined the best performance available today rather than trying to focus on future compatibility. Luckily for me it looks like those two goals are coinciding.
Just because, like any good geek, I love talking shop when it comes to building new PCs, here are the specs of the potential beast in the making:
The first couple of choices I made for this rig were easy. Hands down the best performance out there is with the new Sandy Bridge i7 chips, with the 2600K being the top of the lot thanks to its unlocked multiplier and hyperthreading, which chips below the 2600 lack. The choice of graphics card was a little harder: whilst the Radeon comes out leagues ahead on a price-to-performance ratio, the NVIDIA cards still had a slight performance lead overall, but hardly enough to justify the price. Knowing that I wanted to take advantage of the new SATA 6Gbps range of drives that were coming out, my motherboard choice was almost made for me as the ASRock P67 seems to be one of the few that has more than 4 of the ports available (it has 6, in fact).
The choice of SSD however, whilst extremely easy at the time, became more complicated recently.
You see, back in the initial pre-production review round the OCZ Vertex 3 came out shooting, blasting away all the competition in a seemingly unfair comparison to its predecessors. I was instantly sold, especially considering the price was looking to be quite reasonable, around the $300 mark for a 120GB drive. Sure, I could opt for the bigger drive and dump my most frequently played games on it, but in reality a RAID10 array of SATA 6Gbps drives should be close enough without having to overspend on the SSD. As with any pre-production review, I made sure to keep my ear to the ground just in case something changed once they started churning them out.
Of course, something did.
The first production review that grabbed my attention was from AnandTech, renowned for their deep understanding of SSDs and for producing honest and accurate reviews. The results for my drive size of choice, the 120GB, were decidedly mixed on a few levels, with it falling down in several places where the 240GB version didn’t suffer any such problems. Another review confirmed the figures were in the right ballpark, although it unfortunately lacked a comparison to the 240GB version. The reason behind the performance discrepancy is simple: whilst the drives are functionally the same, the differences come from the number of NAND chips used to build them. The 240GB version has double the amount of the 120GB version, which allows for higher throughput and additionally grants the drive a larger scratch space that it can use to optimize its performance¹.
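The throughput side of that is easy to picture: the controller stripes reads and writes across all the NAND dies in parallel, so raw throughput grows with die count until the host interface caps it. A toy model of the idea (the per-die and link figures here are my own illustrative assumptions, not OCZ’s specs):

```python
def drive_throughput_mb_s(num_nand_dies, per_die_mb_s=60, interface_cap_mb_s=550):
    """Writes are striped across dies (RAID-0 style inside the drive),
    so throughput scales with die count until the SATA link saturates."""
    return min(num_nand_dies * per_die_mb_s, interface_cap_mb_s)

# A hypothetical 120GB drive with 8 dies vs a 240GB drive with 16 dies:
print(drive_throughput_mb_s(8))   # die-limited
print(drive_throughput_mb_s(16))  # link-limited
```

With half the dies, the smaller drive runs out of internal parallelism before it ever saturates the SATA 6Gbps link, which is consistent with the gap the reviews found.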
So of course I started to rethink my position. The main reason for getting a real SSD over something like the PCIe bound RevoDrive was that I could use it down the line as a jumbo flash drive if I wanted to and I wouldn’t have to sacrifice one of my PCIe lanes to use it. The obvious competitor to the OCZ Vertex 3 would be something like the Intel 510 SSD but the reviews haven’t been very kind to this device, putting it barely in competition with previous generation devices.
After considering all my options I think I’ll still end up going with the OCZ Vertex 3 at the 120GB size. Whilst it might not offer the best performance in every category, it does provide tremendous value when compared to a lot of other SSDs and it will be in another league when compared to my current spinning rust hard drive. Once I get around to putting this new rig together you can rest assured I’ll put the whole thing through its paces, if at the very least to see how the OCZ Vertex 3 stacks up against the numbers that have already been presented.
¹Ever wondered why some SSDs are odd sizes? They are in fact good old fashioned binary sizes (128GB and 256GB respectively) however the drive reserves a portion of that (8GB and 16GB) to use as scratch space to write and optimize data before committing it. Some drives also use it as a buffer for when flash cells become unwritable (flash cells don’t usually die, you just can’t write to them anymore) so that the drive’s capacity doesn’t degrade.
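The footnote’s arithmetic as a quick sketch, for the two drive sizes mentioned:

```python
def advertised_capacity_gb(raw_gb, reserved_gb):
    """Usable capacity = raw NAND capacity minus the over-provisioned scratch space."""
    return raw_gb - reserved_gb

print(advertised_capacity_gb(128, 8))   # the "120GB" drive
print(advertised_capacity_gb(256, 16))  # the "240GB" drive
```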