Tesla Gives Us the D: Dual Motors and Autopilot.

The Tesla Model S as we know it today is quite an impressive car. Whilst it’s not exactly within the everyman’s price range yet (getting one landed in Australia likely won’t see much change from $100K) it’s gone a long way to making a high performing electric vehicle available to the masses, especially considering Tesla’s stance on their patents. Before that, electric cars were more of a niche product for the ultra environmentally conscious, combining tiny motors with small frames that would have just enough power to get you to work and back. Now they’re far more easily compared to high end luxury cars and, with the new things Elon announced last week, electric cars are heading into a class all of their own.

Elon teased last week that he was going to unveil “the D” soon (seemingly forgetting how much of a dirty mind the entire Internet has) along with “something else”. The D turned out to be their new drive train system that incorporates 2 motors, making the Tesla Model S one of the few fully electric all wheel drive cars. The something else was the debut of their autopilot system, a sort of cut down version of the Google self-driving car. Whilst the D version of the Model S won’t be available for another couple of months (although you can order one today) all Model S cars built within the last couple of weeks have shipped with the autopilot hardware. Suffice to say both of these announcements are pretty exciting, although the latter probably more so.

The dual motor setup is an interesting upgrade for the Model S as all wheel drive is a pretty common feature among higher end luxury cars, something the Model S has been lacking. Of particular note is how the dual motor upgrade affects the various aspects of the car, like slashing 0.8 seconds off the 0-100 time (bringing it down to 3.2 seconds) and increasing range by about 3.5%, all whilst granting the benefits that all wheel drive provides. Typically you’d be taking a decent hit to range and efficiency due to the increased weight and power requirements but the Model S has managed to come out on top in all respects. Should those figures hold up in real world testing it’ll speak volumes about the engineering team that Tesla has managed to cultivate.
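
To put those figures in perspective, a quick back of the envelope calculation (using only the numbers quoted above) shows just how big a jump that 0.8 seconds really is:

```python
# Back of the envelope check on the quoted dual motor gains,
# using only the figures mentioned in the paragraph above.

new_0_100 = 3.2        # seconds, quoted for the dual motor car
improvement = 0.8      # seconds shaved off by the second motor
old_0_100 = new_0_100 + improvement

accel_gain = improvement / old_0_100 * 100  # % improvement in 0-100 time
range_gain = 3.5                            # quoted range increase, %

print(f"Implied single motor 0-100 time: {old_0_100:.1f} s")
print(f"Acceleration improvement: {accel_gain:.0f}%")
print(f"Range improvement: {range_gain:.1f}%")
```

A 20% improvement in acceleration that comes with a range increase, rather than the usual penalty, is what makes those numbers so remarkable.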

However the most interesting part for me was the debut of Tesla’s autopilot system. Elon Musk has always been of the mind that a self driving car doesn’t need to be an all encompassing thing; instead it should aim to do the majority of tasks first before taking the next leap into full automation. Tesla’s autopilot system is the embodiment of that philosophy, taking some of the technology that’s currently available (emergency braking, lane keeping, collision avoidance) and combining it into one seamless package. It won’t get you from point A to point B without human intervention but it’ll happily take over on the highway, park itself in the garage and even meet you at a certain location. It might not be as comprehensive as what Google is looking to create but it’s available today and does almost everything you’d need it to.

I really shouldn’t be surprised that a Musk created company is managing to innovate so quickly in an industry that has long been one of the slowest movers but honestly these two announcements blew me away. The dual motors might not exactly be a revolutionary piece of technology but the way Tesla has done it speaks volumes about the calibre of people they have working there. The introduction of autopilot just over a year after they first talked about it really is quite amazing and, whilst it might not be the all encompassing system that Google is seeking, it will likely be the standard for many years to come. I can’t wait to see what Tesla has in store for us next as they don’t seem to have any intention of slowing their brisk pace of innovation any time soon.

HP Splits in Two: Hewlett-Packard Enterprise and HP Inc.

The time has long since passed when a computer manufacturer could get by on shipping tin. The margins on computer equipment are so low that, most of the time, the equipment they sell is just a loss leader for another part of the business. Nowadays the vast majority of a large computer company’s revenue comes from its services division, usually under the guise of providing the customer a holistic solution rather than just another piece of tin. Thus the past couple of decades have seen many companies transform from pure hardware businesses into more services focused ones, with several attempting more radical transformations in order to stay relevant. HP has become the most recent company to do this, announcing that it will be splitting itself in half.

HP will now divest itself into 2 different companies. The first will be Hewlett-Packard Enterprise, comprising their server business, services branch and software group. The second, HP Inc., will be purely consumer focused, comprising their personal computer business and their printing branch. If you were going to split a PC business this is pretty much how you’d do it: whilst these functions are somewhat complementary (especially if you want to be the “end to end” supplier for all things computing) there are just as many times when they’re at odds. HP’s overarching strategy with this split is to have two companies that can be more agile and innovative in their respective markets and, hopefully, see better margins because of it.

When I first heard the rumours swirling about this potential split the first question that popped into my head was “where is the services business going?”. As I alluded to before, the services business is the money maker for pretty much every large PC manufacturer these days and in this case the enterprise half of HP has come away with it. The numbers only give a slight lead to the new enterprise business in terms of revenue and profit, however the hardware business has been on a slow decline for the past few years which, if I’m honest, paints a bleak picture for HP Inc. going forward. There’s nothing to stop them from developing a services capability (indeed parts of the consumer business already have one) however in its current form I’d put my money on HP Inc. being the one who comes off worse out of this deal.

That could change however if HP’s rhetoric has some merit to it. HP, as it stands today, is an amalgamation of dozens of large companies acquired over the years and, whilst they all shared the core theme of being in the IT business, there was never really a driving overarching goal for them to adhere to. The split gives HP an opportunity to define that more clearly for each of the respective companies, allowing each to sharpen its mission within its designated market segment. Whether that will translate into the innovation and agility they’re seeking is something we’ll have to wait and see, as this is yet another unprecedented change from a large IT conglomerate.

As someone who’s been involved in the IT industry for the better part of 2 decades the amount of change that’s happened in the last couple of years has been, honestly, staggering. We’ve seen IBM sell off some of its core manufacturing capability (the company no one ever got fired for buying), Dell buy back all its stock to become a private company again and now HP, the last of the 3 PC giants, divest itself into 2 companies. It will likely take years before all the effects of these changes are really felt but suffice to say that the PC industry of the future will look radically different to that of the past.

FULL DISCLOSURE: The writer is a current employee of Dell. All opinions expressed in this article are of the writer’s own and are not representative of Dell.

Windows 10: The Windows 8 For Those Who Can’t Get Over 7.

Microsoft really can’t seem to win sometimes. If they stop making noticeable changes to their products everyone starts whining about how they’re no longer innovating and that people will start to look for alternatives. However should they really try something innovative everyone rebels, pushing Microsoft to go back to the way things ought to be done. It happened with Vista, the Ribbon interface and most recently with Windows 8. Usually what happens though is that the essence of the update makes it into the new version with compromises made to appease those who simply can’t handle change.

And with that, ladies and gentlemen, Microsoft has announced Windows 10.

Everyone seems to be collectively shitting their pants over the fact that Microsoft skipped a version number, somehow forgetting that most recent versions of Windows haven’t followed a simple numerical sequence at all. If you want to get pedantic about it (and really, I do) the last 10 versions of Windows have been: Windows 3.1, Windows 95, Windows 98, Windows NT 4.0, Windows 2000, Windows ME (gag), Windows XP, Windows Vista, Windows 7 and Windows 8. If you were expecting them to release Windows 9 just because the last 2 versions happened to be in numerical order I’m going to hazard a guess you ate a lot of paint as a child.

On a more serious note, the changes that many people were expecting to make up the 8.2 release appear to have been bundled into Windows 10. The start menu makes its triumphant return after 2 years on the sidelines, although those modern/metro apps that everyone loved to hate will now make an appearance on it. For someone like me who hasn’t really relied on the start menu since before Windows 8 arrived (pressing the Windows key and then typing what I want is much faster than clicking my way through the menu) I’m none too bothered by its return. It will probably make Windows 10 more attractive to the enterprise though, as many of them are still in the midst of upgrading from XP (or purposefully delaying upgrading to 8).

The return of the start menu goes hand in hand with the removal of the metro UI that hosted those kinds of apps, which have now been given the ability to run in a window on the desktop. This is probably one of the better improvements as it means you no longer get a full screen app taking over your desktop when you accidentally click on something that has associated itself with a metro app. For me this most often happens with mail: even though I’ve got Outlook installed the Mail app still seems to want to launch itself every so often. Whether or not this will make that style of app more palatable to the wider world remains to be seen, however.

There have also been a few other minor updates announced, like the inclusion of multiple desktops and improved Aero Snap. The command line has also received a usability update, now allowing you to use CTRL + C and CTRL + V to copy and paste respectively. In all honesty if you’re still doing your work in the command line on any version of Windows above Vista you’re doing it wrong, as PowerShell has been the shell of choice for the better part of 7 years. I’m sure some users will be in love with that change but the vast majority of us moved on long ago.

The release date is scheduled for late next year, with a technical preview available right now for enterprising enthusiasts. It will be interesting to see what the take-up rate is as that date might be a little too late for enterprises still running XP, who will most likely favour 7 instead. That being said the upgrade path from 7 to 10 is far easier, so there’s the possibility of Windows 10 seeing a surge in uptake a couple of years down the road. For early adopters of Windows 7 this next release might just be hitting the sweet spot for an upgrade, so there’s every chance that 10 will be as successful as 7.

I’ll reserve my judgement on the new OS until I’ve had a good chance to sit down and use it for an extended period of time. Microsoft rarely makes an OS that’s beyond saving (I’d really only count ME in there) and whilst I might disagree with the masses on 8’s usability I can’t fault Microsoft for capitulating to them. Hopefully the changes aren’t just skin deep as this is shaping up to be the last major revision of Windows we’ll ever see and there’d be nothing worse than for Microsoft to build their future empire on sand.

ASUS Transformer Pad TF103C Review.

I’ve only really owned one tablet, the original Microsoft Surface RT, and try as I might to integrate it into parts of my life I honestly can’t figure out where it fits in. Primarily I think this is a function of apps: whilst the Surface is capable in most respects there’s really no killer feature that makes me want to use it for a specific purpose. It probably doesn’t help that I’m heavily embedded in the Android ecosystem, with all the characteristics that make my phone mine persisted across Google’s cloud. With that in mind, when ASUS offered me their new Transformer Pad TF103C as a review unit for a couple of weeks I was intrigued to see how the experience would compare.

The TF103C is a 10.1″ tablet sporting a quad core, 64 bit Intel Atom processor that runs at up to 1.86GHz. For a tablet those specs are pretty high end which, together with the included keyboard, signals that the TF103C is aimed more towards productivity than simply being a beefy Android tablet. The screen is an IPS display with a 1280 x 800 resolution, which is a little on the low side, especially now that retina level displays are fairly commonplace. You can get it with either 8GB or 16GB of internal storage, which you can easily expand via the built in microSD slot (cards up to 64GB are supported). It also includes the usual fare of wireless interfaces, connectors and sensors, although one feature of note is the full sized USB port on the dock. With an RRP of $429 (and street prices coming in well under that) there’s definitely a lot packed into the TF103C for the price.

As a full unit the TF103C is actually pretty hefty, coming in at a total of 1.1kg, although the tablet itself only makes up about half that. The keyboard dock doesn’t contain an additional battery or anything else that you’d think would make it so heavy, especially considering other chiclet style keyboards come in at about half its weight. Considering my full ultrabook weighs in at about 1.5kg it does take away some of the appeal of having a device like this, at least from my perspective. That being said I’m not exactly the biggest tablet user, so the use of two different form factors is lost on me somewhat.

When used in docked form the TF103C is actually quite capable, especially when you attach a mouse to the dock’s USB port. I had wondered how Android would fare when used in a more traditional desktop way and it actually works quite well, mostly because the web versions of your typical productivity applications have evolved a lot in the past couple of years. The keyboard is probably a little on the small side for people with larger hands but it was definitely usable for quick tasks or replying to email. It falls a little short if you’re going to use it on your lap however, due to the fact that the screen can’t be tilted back past a certain point. It’s still usable but it’s a much better experience on a desk.

The quad core Intel Atom powering the TF103C is extremely capable, as evidenced by the fact that everything on it runs without a stutter or hiccup. I threw a few of the more intensive games I could find at it and never noticed any slowdown, commendable for a tablet in this price range. Push it that hard for long though and the battery life takes quite a hit, knocking the rated 9.5 hours of run time down to less than 4. That being said it managed to stay charged for about a week whilst idle, making it quite usable as a casual computing device.

All in all I was impressed with the capabilities the TF103C displayed, even if I couldn’t really see it replacing any of the devices I currently have. There are a few missed opportunities, like integrating a battery into the keyboard and allowing the screen to tilt back further, however overall it’s a very capable device for the asking price. I could definitely see it having a place on the coffee table as something to be used when needed, with the keyboard dock coming in handy for more grunty work. It might not end up replacing the device you have now but if you’re looking for a decent tablet that can also be productive then you wouldn’t go wrong with the TF103C.

A review unit was provided to The Refined Geek for 2 weeks for reviewing purposes.

Medieval vs Modern: The Making of a Gargoyle.

One thing that always fascinates me is how much (or indeed how little) technology can change some processes. Technology almost always makes things better, faster and cheaper but you’d think there’d be a few areas where it simply couldn’t put a dent in good old fashioned human craft. I don’t know why but when I saw the following video I thought there would be no way that modern processes could be better suited to the task than simply handing it over to a stone mason. By the end of the video however I was stunned at just how quickly, and accurately, we could mill out a giant block of sandstone.

Honestly I probably should have expected it as I’ve seen numerous demonstrations of similar technology producing wildly intricate show pieces in all sorts of materials. However I figured something like this, a craft that many would have thought was now the domain of only a handful of dedicated practitioners, would be better suited to human hands. I have to say though that I doubt anyone today could carve out something like that in the space of 10 hours, even counting all the preparation done beforehand. It’s surprisingly hard to find out just how long it took to carve your average stone gargoyle, unfortunately, so I’m not sure how this compares to the days when stone carving as a profession was more common.

Realistically that’s all a flimsy premise for me to post yet another large engineering demonstration video. I can’t help it though, they tickle me in all the right ways :)

IBM’s Watson has an API, and It’s Answering Questions.

In a world where Siri can book you a restaurant and Google Now can tell you when you should head for the gate at the airport it can feel like the AI future that many sci-fi fantasies envisioned is already here. Indeed to some extent it is, many aspects of our lives are now farmed out to clouds of servers that make decisions for us, but those machines still lack a fundamental understanding of, well, anything. They’re what are called expert systems: algorithms trained on data to make decisions in a narrow problem space. The AI future we’re heading towards is going to be far more than that, one where those systems actually understand data and can make far better decisions because of it. One of the first steps towards this is IBM’s Watson, and its creators have done something amazing with it.

Whilst it’s currently only open to partner developers, IBM has created an API for Watson, allowing you to pose it a question and receive an answer. There’s not a lot of information around what data sets it currently understands (the example is in the form of a Jeopardy! question) but their solution documents reference a Watson Content Store which, presumably, has several pre-canned training sets to get companies started with developing solutions. Indeed some of the applications that IBM’s partner agencies have already developed suggest that Watson is quite capable of digesting large swathes of information and providing valuable insights in a relatively short timeframe.
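
To give you a feel for the “pose it a question, receive an answer” model, here’s a minimal sketch of what such a call might look like. To be clear, the endpoint URL, credentials and JSON shapes below are assumptions for illustration only, not IBM’s documented partner API:

```python
# Hypothetical sketch of posing a question to a Watson-style Q&A service.
# The URL, payload fields and response structure are illustrative
# assumptions; the real partner API may differ considerably.
import requests

ENDPOINT = "https://example.ibm.com/watson/v1/question"  # placeholder URL

payload = {
    "question": {
        "questionText": "This Australian marsupial sleeps up to 20 hours a day."
    }
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    auth=("partner-id", "partner-secret"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()

# Assume the service returns ranked answers, each with a confidence
# score and the evidence backing it, as described above.
for answer in resp.json().get("answers", []):
    print(answer["text"], answer["confidence"], answer.get("evidence"))
```

The interesting part is less the call itself and more the response: natural language in, ranked answers with supporting evidence out.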

I’m sure many of my IT savvy readers are seeing the parallels between Watson and a lot of the marketing material that surrounds anything bearing the buzzword “Big Data”. Indeed many of the concepts of operation are similar: take big chunks of data, throw them into a system and hope that something useful comes out the other end. However Watson’s API suggests something far more accessible, dealing in native human language and providing evidence to back up the answers it gives you. Compare this to Big Data tools, which often require you to either learn a particular query language or create convoluted reports, and I think Watson has the ability to find widespread use whilst Big Data retains its buzzword status.

For me the big applications for something like this are in places where curating domain specific knowledge is a long, time consuming task. Medicine and law both spring to mind as there are reams of information available to power a Watson based system and those fields could most certainly benefit from easier access to those vast treasure troves. It’s pretty easy to imagine a lawyer looking for all the precedents set against a certain law or a doctor asking for all the diseases matching a list of symptoms, both queries answered with all the evidence to boot.

Of course it remains to be seen if Watson is up to the task as, whilst its prowess on Jeopardy! was nothing short of amazing, I’ve yet to see any of its other applications in use. The partner applications do look very interesting, and should hopefully be the proving grounds that Watson needs, but until it starts seeing widespread use all we really have to go on is the result of a single API call. Still I think it has great potential and hopefully it won’t be too long before the wider public can get access to some of Watson’s computing genius.

When Will Buying Clothing Online be as Good as Offline?

I’m not exactly what you’d call a fashionista, the ebbs and flows of what’s current often pass me by, but I do have my own style which I usually refresh on a yearly basis. More recently this has tended towards my work attire, mostly because I spend a great deal more time in it than I did previously. However the act of shopping for clothes is one I like to avoid as I find it tiresome, especially when trying to find the right sizes to fit my not-so-normal dimensions. Thus I’ve recently turned towards custom services and tailoring in order to get what I want in sizes that fit me but, if I’m honest, the online world still seems to be light years behind what I can get from the more traditional fashion outlets.

For instance one of the most frustrating pieces of clothing for me to buy is business shirts. Usually they fall short in one of my three key categories (length, sleeve length and fit in the mid section) so I figured that getting some custom made would be the way to go. I decided to lash out on a couple of shirts from 2 online retailers, Original Stitch and Shirts My Way, to see if I could get something that would tick all 3 boxes. I was also going to review them against each other to see which retailer provided the better fit and would thus become my de facto supplier of shirts for the foreseeable future. However upon receiving both shirts I was greeted with an unfortunate reality: they both sucked.

They seemed to get some things right, like the neck size and overall shirt length, however both seemed to be made for someone who weighs about 40kg more than I do, with the mid section fitting like a tent. Both of them also had ridiculously billowy sleeves, making my arms appear twice as wide as they should be. I kind of expected something like this from Original Stitch, since their measurements aren’t exactly comprehensive, but Shirts My Way suffered from the same issues even though I followed their guidelines exactly. Compared to the things I’ve had fitted or tailored in the past I was extremely disappointed, as I was expecting service as good or better.

The problem could be partially solved by technology: 3D scanning could provide extremely accurate sizing that online stores could then incorporate to ensure you got the right fit the first time around. In fact I’d argue that there should be some kind of open standard for this, allowing the various companies to develop their own solutions that would be interoperable between different clothing retailers. That is something of a pipe dream, I know, but I can’t be the only person who has had this kind of frustration trying to get the right fit from online retailers.
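
To make that idea concrete, an interoperable measurement profile wouldn’t need to be complicated. Something like the sketch below (an entirely hypothetical schema, with made-up field names) would cover the three categories that keep tripping me up:

```python
# Purely hypothetical sketch of an interoperable body measurement
# profile, e.g. produced by a 3D scan and shared with any retailer
# that supports the (imagined) open standard.
import json

measurement_profile = {
    "schema_version": "0.1",   # hypothetical versioned open standard
    "units": "cm",
    "measurements": {
        "neck": 41.0,
        "chest": 102.0,
        "waist": 88.0,
        "sleeve_length": 66.5,
        "shirt_length": 79.0,
    },
    "source": "3d-scan",       # as opposed to self-measured tape readings
}

print(json.dumps(measurement_profile, indent=2))
```

Hand that same profile to Original Stitch, Shirts My Way or anyone else and the billowy sleeve problem becomes theirs to explain, not mine.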

I guess for now I should stick with the tried and true methods of getting the clothing I want, as the online experience, whilst infinitely more convenient, ultimately delivers a lacklustre product. I’m hopeful that change is coming although it’s going to take time to become widespread and I’m sure there won’t be any standards across the industry for a long time after that. Maybe one day I’ll be able to order the right fit from the comfort of my own home but, unfortunately, that day is not today.

The BBC Thinks all VPN Users are Pirates.

If you want Netflix in Australia there’s really only one way to do it: get yourself a VPN with an endpoint in the States. That’s not an entirely difficult process, indeed many of my less tech savvy friends have managed to accomplish it without any panicked phone calls to me. The legality of doing so is something I’m not qualified to get into but, since there hasn’t been a massive arrest spree of nefarious VPN users, I can’t imagine it’s far outside the bounds of the law. Indeed you couldn’t really crack down on it without also cracking down on the more legitimate users of VPN services, like businesses and those with regulatory commitments around protecting customer data. However if you ask the BBC, VPN users are nothing but dirty pirates and it’s our ISPs’ job to snoop on them.

In a submission to the Australian Government, presumably under the larger anti-piracy campaign that Brandis is heading, the BBC makes a whole list of suggestions as to how it should go about combating Australia’s voracious appetite for purloined content. Among the numerous points is the notion that a lot of pirates now use a VPN to hide their nefarious activities. In the BBC’s world ISPs would take this as a kind of black flag, treating any heavy VPN user as likely engaging in copyright infringement. They’d then be subject to the woeful idea of having their Internet slowed down or cut off, presumably unless they could somehow prove their use was legitimate. Even though the submission goes on to talk about false positives the ideas it discusses are fucking atrocious and I hope they never see the light of day.

I have the rather fortunate (or unfortunate, depending on how you look at it) ability to do my work from almost anywhere I choose, including my home. This does mean that I have to VPN back into the mothership in order to get access to my email, chat and all the other corporate resources which can’t be made available over the regular Internet. Since I do a lot of this at home, under the BBC’s suggestion I’d probably be flagged as a potential pirate and be subject to measures to curb my behaviour. Needless to say I don’t think I’m particularly unique in this, so there’s vast potential for numerous false positives under such a system.
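
To illustrate the point, here’s a toy model of the kind of “heavy VPN user” heuristic the BBC seems to be proposing, with completely made-up thresholds and usage figures. Note that it has no way of telling the remote worker from the actual infringer:

```python
# Toy model of a "heavy VPN usage = likely pirate" heuristic with
# made-up thresholds, showing how remote workers get swept up in it.

VPN_HOURS_THRESHOLD = 4  # hypothetical: hours of VPN traffic per day
VPN_GB_THRESHOLD = 2     # hypothetical: GB transferred over VPN per day

def flagged_as_pirate(vpn_hours_per_day: float, vpn_gb_per_day: float) -> bool:
    """Naive heuristic: heavy VPN usage alone triggers the flag."""
    return (vpn_hours_per_day > VPN_HOURS_THRESHOLD
            or vpn_gb_per_day > VPN_GB_THRESHOLD)

users = {
    "remote worker (email, chat, RDP via corporate VPN)": (8, 3),
    "Netflix-over-VPN viewer": (3, 5),
    "actual infringer": (6, 20),
}

for label, (hours, gb) in users.items():
    print(f"{label}: flagged={flagged_as_pirate(hours, gb)}")
```

All three get flagged, which is precisely the problem: the only signal this heuristic has is volume, and volume says nothing about what’s inside the tunnel.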

Worse still, all of those proposed measures fall on the ISPs’ shoulders to design, implement and enforce. Not only would this put an undue burden on them, which they’d instantly pass on to us in the form of increased prices, it would also make them culpable when an infringing user figured out how to defeat their monitoring system. Everyone knows it doesn’t take long for people to circumvent these systems which, again, increases the pressure on ISPs to implement even more invasive and draconian measures. It’s a slippery slope that we really shouldn’t be going down.

Instead of constantly looking to the stick as the solution to Australia’s piracy woes it’s time for companies, and the Australian government, to start looking at the carrot. Start with incentives for rights holders to license content in Australia, or mandate that we get the same content at the same time and the same price as everywhere else. The numerous Netflix users in Australia show there’s demand for such a service, we just need it to match the same criteria that customers overseas enjoy. Once we get that I’m sure you’ll see a massive reduction in the amount of piracy in Australia, coupled with the increase in sales that rights holders seem so desperate to protect.

Now We Can Stop Talking About the iWatch.

I honestly couldn’t tell you how long I’ve been hearing people talk about Apple getting into the smartwatch business. It seemed every time WWDC or any other Apple event rolled around there’d be another flurry of speculation as to what their wearable would be. Like most rumours the details were scant and so the Internet, as always, circlejerked itself into a frenzy over a product that might not have even been in development. In the absence of a real product competitors stepped up to the plate and, to their credit, their devices have started to look more compelling. Well today Apple finally announced their Watch and it’s decidedly mediocre.

For starters it makes the same mistake many smartwatches do: it looks like every other smartwatch on the market. Partly this is due to LCD screens being rectangular, limiting what you can do with them, however for a company like Apple you’d expect them to buck the trend a bit. Instead you’ve got what looks like an Apple-ized version of the Pebble Steel, not entirely unpleasing but at the same time incredibly bland. I guess if you’re a fan of having a shrunken iPhone on your wrist then the style will appeal to you but honestly smartwatches which look like smartwatches are a definite turn off for me, and I know I’m not alone in thinking this.

Details as to what’s actually under the hood of this thing are scarce, probably because, unlike most devices Apple announces, you won’t be able to get your hands on this one right away. Instead you’ll be waiting until after March next year, with a starting price somewhere on the order of $350. That’s towards the premium end of the smartwatch spectrum, something which shouldn’t be entirely unexpected, and could be indicative of the overall quality of the device. Indeed what few details they’ve let slip do seem to indicate it’s got some decent materials science behind it (both in the sapphire screen and the case metals) which should hopefully make it a more durable device.

Feature wise it’s pretty much as you’d expect, sporting the usual array of notifications pushed from your phone alongside a typical array of sensors. Apple did finally make its way into the world of NFC today, both with the Apple Watch and the new iPhone, so you’ll be able to load your credit card details into it and use the watch to make payments. Honestly that’s pretty cool, and definitely something I’d like to see other smartwatch manufacturers emulate, although I’m not entirely hopeful that it’ll work anywhere bar the USA. Apple also touts an interface that’s been designed around the smaller screen but without an actual sample to play with I really couldn’t tell you how good or bad it is.

So all that blather and bluster that preceded this announcement was, surprise, completely overblown and the resulting product does nothing to stand out in the sea of computerized wrist adornments. I’m sure there’s going to be a built in market of current Apple fans but outside that I really can’t see the appeal of the Apple Watch over the numerous other devices. Apple does have a good 6 months or so to tweak the product before release, so there’s potential for it to become something more before they drop it on the public.

DDR4 Appears on The Market; I Realise I’ve Been Under a Rock.

Whilst I don’t spend as much time as I used to keeping current with all things PC hardware related I still maintain a pretty good working knowledge of where the field is going. That’s partly due to my career being in the field (although I’m technically a services guy) but mostly it’s because I love new tech. You’d think then that DDR4, the next generation of PC memory, making its commercial debut wouldn’t be much of a surprise to me, but I had absolutely no idea it was in the pipeline. Indeed had I not been building out a new gaming rig for a friend of mine I wouldn’t have known it was coming, nor that I could buy it today if I were so inclined.

Double Data Rate Fourth Generation (DDR4) memory is the direct successor to the current standard, DDR3, which has been in widespread use since 2007. Both standards (indeed pretty much all memory standards) were developed by the Joint Electron Device Engineering Council (JEDEC), which has been working on DDR4 since about 2005. The reasoning behind the long lead times on new standards like this is complicated but it comes down to getting everyone to agree on the standard, manufacturers developing products around it and then, finally, those products making their way into the hands of consumers. Thus whilst new memory modules come and go with the regular tech cycle the standards driving them typically stick around for the better part of a decade or more, which is probably why this writer neglected to keep current on it.

In terms of actual improvements DDR4 seems like an evolutionary step forward rather than a revolutionary one. That being said the improvements introduced with the new specification are nothing to sneeze at, one of the biggest being a reduction in the voltage (and thus power) that the specification requires. Typical DDR4 modules will now run at 1.2V compared to DDR3’s 1.5V, and the low voltage variant, typically seen in low power systems like smartphones, goes all the way down to 1.05V. To end consumers this won’t mean too much but for large scale deployments the savings from running the new memory add up very quickly.
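
The saving is also bigger than the raw voltage figures suggest, since DRAM power scales roughly with the square of the supply voltage. A rough sketch (the per-module wattage and fleet size are made-up illustrative numbers):

```python
# Rough illustration of DDR4's voltage saving. DRAM power scales
# roughly with the square of the supply voltage (P proportional to
# V^2), so the 1.5V -> 1.2V drop compounds nicely.

ddr3_v, ddr4_v = 1.5, 1.2
ratio = (ddr4_v / ddr3_v) ** 2
print(f"DDR4 power relative to DDR3: {ratio:.0%}")  # ~64%

# Scaled up: assume (purely for illustration) 4W per DDR3 module and
# a data centre fleet of 10,000 modules running around the clock.
modules, watts_per_module, hours_per_year = 10_000, 4, 24 * 365
saved_kwh = modules * watts_per_module * (1 - ratio) * hours_per_year / 1000
print(f"Energy saved per year: {saved_kwh:,.0f} kWh")
```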

As you’d expect there’s also been a bump in the operating speed of DDR4 modules, ranging from 2133MHz all the way up to 4266MHz. Essentially the lowest tier of DDR4 memory will match the top performers of DDR3 and the amount of headroom for future development is quite significant. This will have a direct impact on the performance of systems powered by DDR4 memory and, whilst most consumers won’t notice the difference, it’s definitely going to be a defining feature of enthusiast PCs for the next couple of years. I know I’ve updated my dream PC specs to include it even though the first generation of products is only just hitting the market.
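
For a sense of what those speed grades translate to in raw throughput: with a standard 64 bit (8 byte) memory channel, peak theoretical bandwidth is simply the transfer rate multiplied by the bus width:

```python
# Peak theoretical bandwidth per memory channel. The "2133MHz" style
# figures quoted for DDR modules are effective transfer rates
# (mega-transfers per second) across a 64 bit (8 byte) bus.

BUS_BYTES = 8  # standard 64 bit memory channel

for mt_per_s in (2133, 4266):
    gb_per_s = mt_per_s * 1_000_000 * BUS_BYTES / 1e9
    print(f"DDR4-{mt_per_s}: {gb_per_s:.1f} GB/s per channel")
```

That works out to roughly 17GB/s per channel at the bottom of the DDR4 range and double that at the top, before you even factor in multi-channel configurations.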

DDR4 chips are also meant to be a lot denser than their DDR3 predecessors, especially considering that the specification also accommodates 3D layering technologies like those behind Samsung’s V-NAND. Many are saying this will lead to DDR4 being cheaper than DDR3 for a comparable amount of memory, however right now you’ll be paying about a 40% premium on pretty much everything if you want to build a system around the new style of memory. This is to be expected though and, whilst I can eventually see DDR4 eclipsing DDR3 on a price per gigabyte basis, that won’t happen for several years yet. DDR3 has 7 years’ worth of economies of scale built up and it won’t become irrelevant for a very long time.

So whilst I might be a little shocked that I was so out of the loop I didn’t know a new memory standard had made its way into reality, I’m glad it has. The improvements might be incremental rather than a bold leap forward but progress in this sphere is so slow that anything is worth celebrating. The fact that you can build systems with it today is just another bonus, one that I’m sure is making dents in geeks’ budgets the world over.