If there’s one thing I can’t stand in any game it’s visual tearing and stuttering. This is the main reason I play all my games with v-sync on: whilst I, like any gamer, enjoy the higher frame rates that come with turning it off, it’s not long before I’m turning it back on again once the tearing starts wreaking havoc on my visual experience. Unfortunately this has the downside of requiring me to over-spec my machine to ensure 60 FPS at all times (something I do anyway, though it doesn’t last forever) or lowering the visual quality of the game, something no one wants. It’s been an issue for so long that I had given up on a fix for it, although a 120Hz monitor offered some hope. As it turns out there is a fix, and its name is G-SYNC.
The technology comes by way of NVIDIA and it’s a revolutionary way of having the GPU and your monitor work in tandem to eliminate tearing and stuttering. Traditionally, when you’re driving a monitor with v-sync on like I am, your graphics card has to wait for the monitor’s refresh interval every time it wants to write a frame to it. In a game with a highly variable frame rate (which is anything graphically intensive) this leads to stuttering, where repeated frames give the appearance of the game freezing up. Flipping v-sync off leads to the opposite problem: the GPU can write frames to the monitor whenever it wants. This means a new frame can start being written halfway through a scan cycle which, if there’s even a skerrick of motion, leaves the two frames out of alignment, causing visual tears. G-SYNC lets the GPU dictate when the monitor should refresh, eliminating both issues as every frame is synced perfectly.
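To see the trade-off in miniature, here’s a toy Python model of it. The frame timings are invented and this is nothing like NVIDIA’s actual implementation; it just shows why a fixed refresh interval forces a choice between repeated frames and torn ones whilst an adaptive refresh avoids both:

```python
# Toy model of frame delivery on a fixed 60Hz panel (illustrative numbers only).
REFRESH_INTERVAL_MS = 1000 / 60

frame_times_ms = [14, 22, 15, 30, 16]  # variable render times from a demanding scene

def vsync(frame_times):
    """With v-sync a frame that misses the refresh window is held over,
    so the previous frame is shown again: visible stutter."""
    repeats = sum(1 for t in frame_times if t > REFRESH_INTERVAL_MS)
    return f"{repeats} of {len(frame_times)} frames missed the refresh and stuttered"

def no_vsync(frame_times):
    """Without v-sync the GPU flips the buffer whenever it finishes, so a flip
    that doesn't line up with a refresh boundary lands mid-scan and can tear."""
    tears = sum(1 for t in frame_times if t % REFRESH_INTERVAL_MS > 0.01)
    return f"{tears} of {len(frame_times)} flips landed mid-scan and could tear"

def gsync(frame_times):
    """With G-SYNC the panel refreshes when the GPU says so (within its range),
    so every frame is displayed exactly once, on time."""
    return f"all {len(frame_times)} frames displayed cleanly"

print(vsync(frame_times_ms), "|", no_vsync(frame_times_ms), "|", gsync(frame_times_ms))
```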
For me this is basically monitor nirvana as it gives me the advantages of running v-sync without any of the drawbacks. Better still, all the monitors that support G-SYNC also run at up to 144Hz, something which was going to be a requirement for my next monitor purchase. The only drawback I can see currently is that all these high refresh rate monitors are TN panels, which don’t compare favourably to the shiny new IPS panels that have been flooding the market recently. Honestly though, I’m more than willing to trade off the massive resolution and better colour reproduction to solve the main visual gripe that’s plagued me for the better part of 20 years.
Unfortunately your options for getting a G-SYNC capable monitor right now are fairly limited. Whilst a good number of monitors were recently announced as supporting G-SYNC, none of them have become commercially available yet, with all of them scheduled for release in Q2 2014. You can, if you’re so inclined, purchase an ASUS VG248QE and then hit up NVIDIA directly for a G-SYNC upgrade kit (currently out of stock) and upgrade your monitor yourself, but it will require you to crack the thing open in order to do so. There are places that will do this for you, though they too are out of stock. Still, for something like this I’m more than willing to wait and, hopefully, by then the other components of my new computer build will have come down a touch, enough to justify the extra expenditure on these newfangled monitors.
As a poor student the last thing I wanted to pay for was software. Whilst the choice to pirate a base operating system is always questionable (it’s the foundation on which all your computing activities rely) it was either pay the high license cost or find an alternative. I’ve since found numerous legitimate alternatives, of course (thank you BizSpark), but not everyone is able to take advantage of them. Thus for many the choice to upgrade their copy of Windows typically comes with the purchase of a new computer, something which doesn’t happen as often as it used to. I believe this is one factor that’s hurt Windows 8/8.1 adoption rates, and it seems Microsoft might be willing to try something radical to change it.
Rumours have been making the rounds that Microsoft is potentially going to offer a low cost (or completely free) version of its operating system dubbed Windows 8.1 with Bing. Details as to what is and isn’t included are still somewhat scant, but it seems like it will be a full version without any major strings attached. There are even musings about some of Microsoft’s core applications, like Office, being bundled in with the new version of Windows 8.1. This wouldn’t be unusual (they already do it with Office RT on the Surface), however it’s those consumer applications that Microsoft draws a lot of its revenue from in this particular market segment, so their inclusion would mean the revenue would have to be made up somewhere else.
Many are touting this release as being targeted mostly at Windows 7 users who are putting off making the switch to Windows 8. Their barriers to entry are by far the lowest, although they’re also the ones with the least to gain from the upgrade. Depending on the timing of the release, though, this could also be a boon for those XP laggards who run out of support in just over a month. The transition from XP to Windows 8 is much more stark, however, both in terms of technology and user experience, but there are numerous things Microsoft could do to smooth it over.
Whilst I like the idea there’s still the looming question of how Microsoft would monetize something like this, as releasing a product for free and making up the revenue elsewhere isn’t really their standard business model (at least not with Windows itself). The “with Bing” moniker suggests they’ll be relying heavily on browser based revenue, possibly by restricting users to Internet Explorer. They’ve gotten into hot water for doing similar things in the past, although they’d likely be able to argue that they no longer hold a monopoly on Internet connected devices like they once did. Regardless it will be interesting to see what the strategy is, as the mere rumour of something like this is new territory for Microsoft.
It’s clear that Microsoft doesn’t want Windows 7 to become the next XP and is doing everything it can to make the switch attractive. They’re facing an uphill battle, as a good 30% of Windows users are still on XP, users who are unlikely to change even in the face of imminent end of life. A free upgrade might be enough to coax some of them across, however Microsoft needs to start selling the transition from any of their previous versions as a seamless affair, something anyone can do on a lazy Sunday afternoon. Even then there will still be holdouts, but at least it’d go a long way towards pushing the other versions’ market share down into the single digits.
It will likely come as a shock to many to find out that Australia leads the world in 4G speeds, edging out many other countries by a very healthy margin. As a regular user of 4G for both business and pleasure I can attest that the speeds are phenomenal, with many of the CBD areas around Australia giving me 10~20Mbps on a regular basis. However the speeds have noticeably degraded over time; back in the early days it wasn’t unheard of to get double those speeds, even on the fringes of reception. The primary factor is an increased user base: as the network becomes more loaded, the bandwidth available to everyone starts to head south.
There are two factors at work here, both of which influence the amount of bandwidth a device will be able to use. The primary one is the size of the backhaul pipe on the tower, as that is the hard limit on how much traffic can pass through a particular end point. The second, and arguably just as important, factor is the number of devices versus the number of antennas on the base station, as this determines how much of the backhaul speed can be delivered to a specific device. The latter is what I believe has been mostly responsible for the reduction in 4G speeds I’ve experienced but, according to the engineers at Artemis, a new communications start-up founded by Steve Perlman (the guy behind the now defunct OnLive), that might not be the case forever.
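A quick back-of-the-envelope makes the point; every figure here is hypothetical, but the shape of the maths is what matters:

```python
# Sketch of the two limits described above: the tower's backhaul caps the total,
# and the radio side shares its capacity between active devices.
# All figures are hypothetical, for illustration only.
backhaul_mbps = 300          # capacity of the tower's backhaul link
sector_capacity_mbps = 150   # what the radio side can deliver per sector
devices = 40                 # active devices sharing the sector

per_device = min(sector_capacity_mbps, backhaul_mbps) / devices
print(f"~{per_device:.1f}Mbps per device")  # ~3.8Mbps: a long way from early-days 4G

# Halve the user count and the picture changes dramatically.
print(f"~{min(sector_capacity_mbps, backhaul_mbps) / (devices / 2):.1f}Mbps per device")
```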
Artemis’ new system hopes to solve the latter part of the equation not by eliminating signal interference (that’s by definition impossible) but by utilizing it to create pCells (personal cells) that are unique to each and every device present on the network. According to Perlman this would allow an unlimited number of devices to coexist in the same area and yet still receive the same amount of signal and bandwidth as if each were on the network all by itself. Whilst he hasn’t divulged exactly how this is done, he has revealed enough for us to get a good idea about how it functions and, I have to say, it’s quite impressive.
So the base stations you see in the above picture are only a small part of the equation; indeed from what I’ve read they’re not much different to a traditional base station under the hood. The magic comes in the form of the calculations done before the signal is sent out: instead of blindly broadcasting (like current cell towers do) the system uses the location of your handset, and of everyone else connected to the local pCell network, to determine how the signals should be sent out. This manifests as a signal that’s coherent only at the location of your handset, giving you the full amount of signal bandwidth regardless of how many other devices are nearby.
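Here’s a toy illustration of that principle, based on my reading of it rather than anything Artemis has published; the transmitter positions and wavelength are made up:

```python
import numpy as np

# Several transmitters pick per-path phase offsets so their carriers add
# coherently at one handset's location and largely cancel elsewhere.
wavelength = 0.16  # metres, roughly a 1.9GHz carrier
tx = np.array([[0.0, 0.0], [2.7, 0.1], [6.3, -0.2], [11.1, 0.3]])  # transmitter coords (m)
target = np.array([4.8, 15.2])     # the handset we steer energy towards
elsewhere = np.array([9.1, 12.7])  # some other point in the coverage area

def amplitude(point, phases):
    """Magnitude of the summed unit-amplitude carriers arriving at a point."""
    dist = np.linalg.norm(tx - point, axis=1)
    return abs(np.sum(np.exp(1j * (2 * np.pi * dist / wavelength + phases))))

# Pre-compensate each transmitter's phase for its path length to the target.
phases = -2 * np.pi * np.linalg.norm(tx - target, axis=1) / wavelength

print(f"at the handset: {amplitude(target, phases):.2f} (coherent peak of 4.00)")
print(f"off target:     {amplitude(elsewhere, phases):.2f} (incoherent, much weaker)")
```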
I did enough communications and signal processing at university to know something like this is possible (indeed it’s a similar kind of technology that powers “sound lasers”) and could well work in practice. The challenges facing this technology are many, but from a technical standpoint there are two major ones I can see. Firstly it doesn’t solve the backhaul bandwidth issue, meaning there’s still an upper limit on how much data can pass through a tower regardless of how good the signal is. For a place like Australia this would be easily solved by implementing a full fibre network which, unfortunately, seems to be off the cards currently. The second problem is more nuanced and has to do with the calculations required and the potential impact they might have on the network.
Creating these kinds of signals, ones that are only coherent at a specific location, requires a fair bit of back-end calculation before the signal can be sent out. The more devices you have in any particular area the more challenging this becomes and the longer the calculation takes before the signal can be generated. This has the potential to introduce lag into the network, something that might be somewhat tolerable from a data perspective but is intolerable when it comes to voice transmission. To their credit Artemis acknowledges this challenge and has stated that their system can handle up to 100 devices currently, so it will be very interesting to see if it can scale out like they believe it can.
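To get a feel for why this is the scaling worry, here’s a rough sketch; Artemis hasn’t said how their precoding works, but schemes of this family generally involve solving for transmit weights against a channel matrix whose size grows with the device count, which costs roughly O(n³), and the work has to finish before the next transmission slot:

```python
import time
import numpy as np

# Proxy for the per-slot weight calculation: inverting an n-by-n channel matrix.
# (An assumed model, not Artemis' published method.)
for n in (25, 50, 100, 200):
    H = np.random.randn(n, n) + 1j * np.random.randn(n, n)  # stand-in channel matrix
    start = time.perf_counter()
    np.linalg.inv(H)
    print(f"{n:>3} devices: {1000 * (time.perf_counter() - start):6.2f}ms per slot")
```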
Of course this all hinges on the incumbent cellular providers getting on board with the technology, something a few have already said they’re aware of without going much further than that. If it works as advertised then it’s definitely a disruptive technology, one I believe should be adopted everywhere, but large companies tend to shy away from things like this, which could strongly hamper adoption. Still, this tech could have wide reaching applications outside the mobile arena, as things like municipal wireless could also use it to their advantage. Whether it will see application there, or anywhere for that matter, will be something to watch out for.
Most aircraft capable of Short Take-Offs and Landings (STOL) are small, nimble planes, usually either designed for use in adverse conditions or, more famously, fighter jets that find their homes on aircraft carriers. The reasons for this are pretty simple: the larger you make the aircraft the more power you require to shorten its take off and, past a certain point, regular old jet engines simply aren’t going to cut it any more. However there have been a few notable examples of large aircraft using JATO (Jet Assisted Take-Off) rockets to drastically shorten their take off profile, the most notable of which is the Blue Angels’ C-130, dubbed Fat Albert:
If you’ve ever seen one of these beasts take off in person (or even, say, an Airbus A380, which is a monster by comparison) then you’ll know they seem to take forever to get off the ground. Strapping 8 JATO bottles, each producing some 1,000lbs of thrust, to the back of a C-130 makes it look like a fighter jet when it’s taking off, gaining altitude at a rate that just seems out of this world. Of course this raises the question of why you’d want to do something like this, as it’s not often that a C-130 or any of its brethren find themselves in a situation where taking off that quickly would be necessary.
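Some napkin arithmetic shows why it looks so dramatic. The per-bottle thrust comes from above; the aircraft weight and the engines’ equivalent static thrust are my own ballpark assumptions, so treat the output as indicative only:

```python
# Napkin arithmetic on the JATO take-off. Only the ~1,000lbs per bottle figure
# comes from the text; the rest are ballpark assumptions.
jato_lbf = 8 * 1_000      # eight bottles at ~1,000lbs of thrust each
engines_lbf = 18_000      # assumed combined static thrust of the four turboprops
weight_lbs = 100_000      # assumed weight of a lightly loaded demo C-130

plain = engines_lbf / weight_lbs
boosted = (engines_lbf + jato_lbf) / weight_lbs
print(f"thrust-to-weight: {plain:.2f} without JATO, {boosted:.2f} with "
      f"(a {100 * (boosted / plain - 1):.0f}% jump for the few seconds the bottles burn)")
```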
The truth is it rarely is, as the missions these large craft fly are typically built around their requirement for a long runway. There have been some notable exceptions though, the most recent being the Iranian Hostage Crisis over 30 years ago. After the failure of the first rescue attempt the Pentagon set about creating another mission to rescue the hostages. The previous failure was largely blamed on the use of a large number of heavy lift helicopters, many of which didn’t arrive in operational condition. The thinking was to replace those helicopters with a single C-130, modified to land in a nearby sports stadium to evacuate the extraction teams and the hostages.
The mission was called Operation Credible Sport and was tasked with modifying 2 C-130 craft to be capable of landing in a tight space. They accomplished this with no fewer than 30 JATO rockets: 8 facing backward (for take off), 8 facing forward (for braking on landing), 8 pointed downwards (to slow the descent), 4 on the wings and 2 on the tail. Initial flight tests showed the newly modified C-130 was capable of performing the take-off in the required space, however on landing the 8 downward facing rockets failed to fire and, combined with one of the pilots accidentally triggering the braking rockets early, the craft met its demise, thankfully without injury to any of the crew.
Even Fat Albert doesn’t do JATO runs any more as a shortage of the required rocketry spelled an end to it in 2009. It’s a bit of a shame as it’s a pretty incredible display but considering it had no practical use whatsoever I can see why they discontinued it. Still the videos of it are impressive enough, at least for me anyway.
Growing up in a rural area meant that my Internet experience was always going to be below that of my city-living counterparts. This wasn’t much of an issue for a while, as dial-up was pretty much all you could hope for anywhere in Australia, however the advent of broadband changed this significantly. From then on the disparity in Internet accessibility was pretty clear and the gap only grew as time went on. This didn’t seem to change much after I moved to the city either; I always seemed to luck out with places that connected at speeds far below the advertised maximum our current generation ADSL lines were capable of. Worse still, they almost always seemed to be at the mercy of the weather, with adverse conditions dropping speeds or disconnecting us from the Internet completely.
My current place of residence has never had great speeds, topping out at 6Mbps and only managing to sustain that connection for a couple of hours before falling over. I can expect a pretty stable 4Mbps connection most of the time, however the last few days have seen Canberra get a nice amount of rain and the speeds I could get barely tickled 1Mbps, no matter how many times I reconnected, reset my modem or shouted incoherently at the sky. It was obvious that my situation was caused by the inclement weather filling my local Telstra pit with water, which sent the signal to noise ratio into the ground. Usually this is something I’d just take on the chin, but this situation was meant to have been improved by now and would have been were it not for the current government.
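The physics behind that nosedive is simple enough to sketch: achievable line rate scales with the usable band times log2(1 + SNR), per Shannon, so flooding the pit and crushing the signal to noise ratio collapses the ceiling. The bandwidth and SNR figures below are illustrative only:

```python
import math

# Shannon capacity ceiling for a rough ADSL downstream band, dry versus flooded.
bandwidth_hz = 1.1e6  # approximate usable ADSL downstream band

def capacity_mbps(snr_db):
    snr = 10 ** (snr_db / 10)  # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr) / 1e6

print(f"dry pit (~20dB SNR): ~{capacity_mbps(20):.1f}Mbps ceiling")
print(f"flooded pit (~3dB):  ~{capacity_mbps(3):.1f}Mbps ceiling")
```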
Prior to the election my area was scheduled to start construction in October last year, however it became one of the areas that disappeared off NBNco’s deployment map shortly after the Abbott government came into power. This means I now come under their revised plan to bring in FTTN using VDSL, which has the unfortunate consequence of leaving me on the known-bad copper in my street. So my speeds might improve, but it’s unlikely I’d get the promised “at least” 20Mbps and I could guarantee that every time it rained I’d be in for another bout of tragic Internet speeds, if I could connect at all.
The big issue with the Liberals’ NBN plan is that my situation is by no means unique; indeed it’s quite typical thanks to the aging infrastructure that’s commonplace throughout much of Australia. The only place I know of that gets the advertised speeds for its cable run is my parents’ place, and they still live in a rural area. The reason is that the copper out there is new and quite capable of carrying the higher speeds. My infrastructure on the other hand, in a place where you’d expect it to be regularly maintained, doesn’t hold a candle to theirs and will continue to suffer from issues after we get “upgraded”.
A full FTTP NBN, on the other hand, would eliminate these issues, providing ubiquitous access that’s, above all, dependable and reliable. The copper last mile that the majority of Australia will end up using as part of the Liberals’ NBN just can’t provide that, not without significant remediation which neither Telstra nor the government has any interest in doing. Hopefully the Liberal government wakes up and realises this before we get too far down the FTTN hole, as it’s been shown that the majority of Australians want the FTTP NBN and they’re more than willing to pay for it.
It doesn’t seem that long ago that Felix Baumgartner leapt from his balloon 39km above the Earth’s surface, breaking Joseph Kittinger’s long standing record. The descent took only minutes yet captivated the millions who watched on with bated breath. Curiously though, for a long time we only saw one perspective of it: that of the observation cameras chronicling Felix’s journey. Now we can have front row seats to what Felix himself saw on the way down, including the harrowing spin that threatened to end everything in tragedy.
Cryptocurrencies and I have a sordid history. It began with me comparing BitCoin to a pyramid scheme, pointing out the issues that were obvious to many casual observers and receiving some good feedback in the process. Over time I became more comfortable with the idea, although I still lamented the volatility and obvious market speculation, and would go as far as to say I became an advocate for it, wanting it to succeed in its endeavours. Then I met the community, filled with outright hostile individuals who couldn’t tolerate any criticism and acted like they were the victims of an oppressive government regime. I decided then that I wouldn’t bother blogging about BitCoin as much as I had done previously, as I was just sick of the community that had grown around it.
Then came Dogecoin.
Dogecoin, for the uninitiated, is a scrypt based cryptocurrency (meaning it uses a memory-hard hashing algorithm, so the ASICs and other mining hardware that BitCoiners have invested in are useless for mining it) which bears the mark of the Internet meme Doge. The community that sprang up around it is the antithesis of what the BitCoin community has become, with every toxic behaviour lampooned and everyone encouraged to have fun with the idea. Indeed getting into Dogecoin is incredibly simple, with tons of guides and dozens of users ready and willing to help you out should you need it. Even if you don’t have the hardware to mine at a decent rate you can still find yourself in possession of hundreds, if not thousands, of Dogecoins in a matter of minutes from any number of faucet services. This has led to a community of people who aren’t the technical elite or those looking to profit, the very groups which I believe led other cryptocurrency communities to become so toxic.
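For the curious, scrypt’s memory-hardness is easy to poke at from Python’s standard library. A minimal sketch: the cost parameters below are the ones commonly cited for Litecoin-family scrypt coins (n=1024, r=1, p=1) and the input is a stand-in, not a real block header:

```python
import hashlib

# scrypt is deliberately memory-hard: each hash needs roughly 128 * r * n bytes
# of working memory, which is what blunts the advantage of SHA-256 ASICs.
header = b"block header bytes would go here"  # placeholder input

digest = hashlib.scrypt(
    header,
    salt=header,        # scrypt coins conventionally salt with the header itself
    n=1024, r=1, p=1,   # ~128KB of memory per hash at these settings
    dklen=32,
)
print(digest.hex())  # miners vary a nonce until this falls below the target
```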
I myself hold about 20,000 Doge after spending about a week’s worth of nights mining on my now 3 year old system. Whilst I haven’t done much more than that, it was far, far more than I had ever thought about doing with any other cryptocurrency. My friends are also much more willing to talk to me about Dogecoin than BitCoin, with a few even going as far as to mine a few coins to fool around with on Reddit. Whether they will ever be worth anything doesn’t really factor into the equation, but even at their current fraction-of-a-penny value there have still been some incredible stories of people making things happen with them.
For most of its life though the structural issues that plagued BitCoin were also inherent in Dogecoin, albeit in a much less severe form. The initial disparity between early adopters and the unwashed masses is quite a lot smaller thanks to Dogecoin’s initial virality, but there was still a supposed limit of 100 billion coins, which still made it deflationary. However the limit wasn’t actually enforced and thus, in its initial incarnation, Dogecoin was inflationary, and a debate erupted over what was to be done about it. Today Dogecoin’s creator made a decision: he elected to keep it that way.
One of my biggest arguments against BitCoin was its deflationary nature; not for whatever reason people assume I have against it, but because that deflationary nature encourages speculation and hoarding rather than spending. Whilst the inflation at this point is probably a little too high (i.e. the price instability is due more to new coin creation than anything else) it does stop people attempting to use Dogecoin as a speculative investment vehicle. Indeed plenty of those who don’t “get” Dogecoin have been lamenting this change, but in all honesty it’s the best decision that could have been made and shows the Dogecoin creators understand the larger (non-technical) issues that plague BitCoin.
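The arithmetic behind “inflationary, but decreasingly so” is worth a quick sketch. It assumes the reportedly settled-on reward of 10,000 DOGE per roughly one-minute block, so treat the figures as approximate:

```python
# With a fixed block reward, issuance is constant, so the *rate* of inflation
# falls every year as the supply base grows. Approximate figures only.
reward = 10_000
blocks_per_year = 60 * 24 * 365          # roughly one block a minute
issuance = reward * blocks_per_year      # ~5.26 billion new DOGE per year

supply = 100_000_000_000                 # the original "limit", taken as a starting point
for year in range(1, 6):
    supply += issuance
    print(f"year {year}: supply {supply / 1e9:.1f}B, inflation {100 * issuance / supply:.2f}%")
```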
Will this mean that Dogecoin becomes the cryptocurrency of choice? Probably not: as with most nascent technologies it will likely be superseded by something better, something that addresses all the current issues whilst bringing new features the old systems simply cannot support. Still, the explosion in altcoins shows there’s a market out there for cryptocurrencies with feature sets beyond what BitCoin provides. Whether any of them win out depends entirely on where the market wants to head.
The story of AMD’s rise to glory on the back of Intel’s failures is well known. Intel, filled with the hubris that can only come from holding a dominant market position for as long as they had, thought the world could be brought into 64bit computing on the back of their brand new platform: Itanium. The cost of adopting that platform was high however, as it made no attempt at backwards compatibility, forcing you to revamp your entire software stack to take advantage of it (for benefits that were highly questionable). AMD, seeing the writing on the wall, instead developed their x86-64 architecture, which not only delivered 64bit capability but went as far as to outclass the then current generation of Intel processors in 32bit performance. It was then an uphill battle for Intel to play catch-up with AMD, but the past few years have seen Intel dominate AMD in almost every metric, with the one exception of performance per dollar at the low end.
That could be set to change however with AMD announcing their new processors, dubbed Kaveri:
On the surface Kaveri doesn’t seem too different from the regular processors you’ll see on the market today, sporting an on-die graphics card alongside the core compute units. As the above picture shows, however, the amount of die space dedicated to said GPU is far more than on any other chip currently on the market, and the transistor count, a cool 2.1 billion, is a testament to this. Beyond that it starts to look more and more like a traditional quad core CPU with an integrated graphics chip, something few would get excited about, but the real power of AMD’s new Kaveri chips comes from the architectural changes that underpin this insanely complex piece of silicon.
The integration of GPUs onto CPUs has been standard for some years now, with some 90% of chips shipping with an on-die graphics processor. For all intents and purposes the only distinction between them and discrete units is their location within the computer, as they’re essentially identical at the functional level. There are some advantages to being so close to the CPU (usually around the latency eliminated by not having to communicate over the PCIe bus) but integrated GPUs are still typically inferior due to the limited die space that can be dedicated to them. This was especially true of generations previous to the current one, which weren’t much better than the integrated graphics chips that shipped with many motherboards.
Kaveri, however, brings with it something that no other CPU has managed before: a unified memory architecture.
Under the hood of every computer is a whole cornucopia of different styles of memory, each with their own specific purpose. Traditionally the GPU and CPU each have their own discrete pools of memory: the CPU with its bank of system RAM (which is typically what people are referring to when they say “RAM”) and the GPU with similar. Integrated graphics would typically take advantage of the system RAM, reserving a section of it for their own use. In Kaveri the distinction between the CPU’s and GPU’s memory is gone, replaced by a unified view where either processing unit is able to access the other’s. This might not sound particularly impressive but it’s by far one of the biggest changes to come to computing in recent memory, and AMD is undoubtedly the pioneer in this realm.
A GPU’s power comes from its ability to rapidly process highly parallelizable tasks, things like rendering or number crunching. Traditionally however it’s constrained by how fast it can talk to the more general purpose CPU, which is responsible for giving it tasks and interpreting the results. Such activities usually involve costly copy operations flowing through the slower interconnects in your PC, drastically reducing the effectiveness of the GPU’s power. Kaveri CPUs suffer from no such limitation, allowing seamless communication between the GPU and the CPU and enabling both to perform tasks and share results without the traditional overhead.
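A back-of-the-envelope model shows how much those copies cost and what a unified address space buys; the numbers are illustrative rather than measured:

```python
# Toy model of a GPU offload: with a discrete card the working set must be
# copied over PCIe in both directions; with unified memory (as in Kaveri's
# architecture) the CPU and GPU read the same memory, so the staging copies go away.
data_gb = 2.0            # working set handed to the GPU
pcie_gbps = 8.0          # assumed effective PCIe transfer rate, GB/s
gpu_compute_s = 0.05     # time the GPU actually spends computing

copy_s = 2 * data_gb / pcie_gbps  # copy in, copy results back
discrete_total = copy_s + gpu_compute_s
unified_total = gpu_compute_s

print(f"discrete: {discrete_total:.2f}s ({100 * copy_s / discrete_total:.0f}% spent copying)")
print(f"unified:  {unified_total:.2f}s")
```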
The one caveat at this point is that software needs to be explicitly coded to take advantage of this unified architecture. AMD is working extremely hard to get low level tools to support it, meaning that programs should eventually be able to take advantage of it without much hassle, however it does mean the Kaveri hardware is arriving long before the software that can exploit it. It’s sounding a lot like an Itanium moment, for sure, but as long as AMD makes good on its promises of working with tools developers (whilst retaining the required backwards compatibility) this has the potential to be another coup for AMD.
If the results from the commercial units are anything to go by then Kaveri looks very promising. Sure it’s not a performance powerhouse but it certainly holds its own against the competition and I’m sure once the tools catch up you’ll start to see benchmarks demonstrating the power of a unified memory architecture. That may be a year or two out from now but rest assured this is likely the future for computing and every other chip manufacturer in the world will be rushing to replicate what AMD has created here.
Ever since Microsoft and Nokia announced their partnership (and Microsoft’s subsequent acquisition of Nokia’s devices business) I had wondered when we’d start seeing a bevy of feature phones running a Windows Phone operating system behind the scenes. Sure there are a lot of cheaper Lumias on the market (the Lumia 520 can be had for $149 outright) but there isn’t anything at the low end, where Nokia has been the undisputed king for decades. That section of the market is currently dominated by Nokia’s Asha line of handsets, a curious operating system that came into being shortly after Nokia canned all development on Symbian and its other alternative mobile platforms. However rumours have long circled that Nokia was developing a low end Android handset to take over this area of the market, predominantly due to the rise of the cheap Android handsets that were beginning to trickle in.
The latest leaks from engineers within Nokia appear to confirm these rumours, with the above pictures showcasing a prototype handset developed under the Normandy code name. Details are scant as to what the phone actually consists of, but the notification bar does look distinctly Android whilst the rest of the UI doesn’t bear any resemblance to anything else on the market currently. This fits with the rumours that Nokia was looking to fork Android and make its own version of it, much like Amazon did for the Kindle Fire, which would also mean they’d likely be looking to create their own app store as well. This would be where Microsoft could have its in: pushing Android versions of its Windows Phone applications through its own distribution channel without having to seek Google’s approval.
Such a plan relies almost wholly on Nokia being the trusted name in the low end space, managing to command a sizeable chunk of the market even in the face of numerous rivals. Even though Windows Phone has been gaining ground recently in developed markets, it’s still been unable to gain much traction in emerging ones. Using Android as a trojan horse to get users onto their app ecosystem could potentially work, however it’s far more likely that those users will simply remain on the new Android platform. Still, there would be a non-zero number who would eventually look towards moving up in terms of functionality, and when it comes to Nokia there’s only one platform to choose from.
Of course this all hinges on Microsoft being actively interested in pursuing the idea and it not simply being part of the ongoing skunk works of Nokia employees. That being said, Microsoft already makes a large chunk of change from every Android phone sold thanks to its licensing arrangements with numerous vendors, so they would have a slight edge in creating a low end Android handset. Whether they eventually use that to try and leverage users onto the Windows Phone platform is something we’ll have to wait to see, as I can imagine it’ll be a long time before an actual device sees the light of day.
There are few things I find more enjoyable than putting together a new PC. It starts off with the chase, where I determine my budget and then start hunting down the various components that will make up the final system. Then comes the verification, where I trawl through dozens upon dozens of reviews to ensure I’ve selected only the best products for their price bracket. Finally the time comes to purchase all the components, hopefully from a single vendor with price matching, and once they arrive I begin the immensely enjoyable task of assembling my (or someone else’s) new PC. Nothing quite beats the feeling of seeing Windows boot for the first time on hardware you just finished building.
Of course I realise the vast majority of the world doesn’t enjoy engaging in such activities, especially if all they’re doing with their PC is watching movies or the occasional bit of word processing, and this is typically when I’ll send them to any one of a number of PC manufacturers who can give them a solid device with a long warranty. My gamer buddies will typically get me to validate their builds and, if they don’t feel up to the task, get me to build the thing, or they’ll simply stick to consoles, which provide a pretty good experience for much of their useful life. This is why I think Razer’s Project Christine is targeting a market that just doesn’t exist: it sits in between well defined market segments that are both already well serviced.
Project Christine is, as a concept, a pretty interesting idea. All the core components that make up a PC (RAM, storage, graphics card, etc.) have been modularized, allowing almost anyone to build up a custom PC of their liking without the requisite PC building experience. The design is somewhat reminiscent of the Thermaltake Level 10, which used the compartmentalization of different parts to improve cooling as well as to make maintenance easier. Razer’s concept takes this idea to the extreme, effectively commoditizing some of the skills required to build a high end gaming PC whilst still retaining the same issues around configuration, like knowing which components are the best bang for your buck at the time.
Razer could potentially head off that second issue by going ahead with their subscription based model for upgraded parts. The idea is that after you’ve bought whatever model you want (the service appears to be targeted at the high end) you pay a monthly subscription fee to get the latest and greatest parts delivered to you. For the ultimate hardcore gamer this could be somewhat attractive, however it’d likely be an extremely expensive service to opt in to, as the latest PC components are rarely among the cheapest or best value. Still, if you’ve got a lot of money and not a whole lot of time it could be of use to you, except that having invested so much in a gaming rig typically means you have enough time to make use of it.
This is where I feel Project Christine falls down, as the target market is a demographic of people who are interested in configuring their computer right up to the point of physically building it. Whilst I don’t really have any facts to back up this next assertion, it has been my experience that people of this nature are either already well serviced by custom build services (which most PC shops provide) or know someone with the capabilities to do it. Sure, the modular nature of Christine is pretty awesome, and it certainly makes a striking impression, however it also means you need to wait for Razer to Christine-ify parts before they’ll be available to you. You might be able to crack them open and do the upgrade yourself, but then you’re really only one step away from doing a full PC build anyway.
With consoles and PCs lasting longer and longer, concepts like Project Christine seem rooted in the outdated idea that a gaming PC needs constant upgrades to remain viable. That simply hasn’t been the case for the better part of a decade and, whilst the next generation of consoles might spur an initial burst of PC upgrades, it’s doubtful the constant upgrade cycle will ever return. Project Christine might find itself a dedicated niche of users, but I really don’t believe it will be large enough to be sustainable, even with the Razer name behind it.