The release date for the final version of Windows 10 has been set: July 29 of this year.
The announcement comes as a shock to no one; Microsoft had repeatedly committed to making Windows 10 generally available sometime this year. The timing, however, is far more aggressive than I would have expected. The Windows Insider program was going along well, although the indications were that most of the builds still had a decidedly beta feel to them, with many features still missing. Indeed the latest build was released just three days ago, suggesting that a full release was still some time away. Microsoft isn’t one to give soft dates, especially for their flagship OS, so we can take the July 29 date as gospel from here on out.
Since everyone in the Insider program has had their hands on Windows 10 for some time now the list of features likely won’t surprise you, however there were a few things that caught my eye in Microsoft’s announcement post. By the looks of it Office 2016 will be released alongside the new version of Windows, including a new universal app version that’s geared towards touch devices. Considering how clumsy the desktop Office products felt on touch screens this is a welcome addition for tablet and transformer devices, although I’d hazard a guess that the desktop version will still be the preferred one for many. What’s really interesting though is that OneNote and Outlook, long considered staples of the Office suite by many, will now be included in the base version of Windows for free. It’s not as big of an upset as including, say, Word or Excel would be, but it’s still an unexpected move nonetheless.
Many of the decidedly lacklustre default Metro apps will get some new life breathed into them with an update to the universal app platform. On the surface this removes their irritating “takes over your entire desktop when launched” behaviour and makes them behave a lot more like traditional apps. Whether or not they’ll be improved to the point of being usable beyond that is something I’ll have to wait and see, although I do have to admit that some of the built-in apps (like the PDF reader) were quite useful to have. How well the integration between those apps, the cloud and other devices that can run universal apps works remains to be seen, although I’ve heard positive things about this experience in the past.
It seems that Microsoft has had this date in mind for some time now, as all my home Windows 8.1 installs chirped up with a “Reserve your free Windows 10!” pop-up late last night. This is the realisation of the promise Microsoft made back at the start of the year to provide a free Windows 10 upgrade to all current consumer level customers, something I thought would likely be handled through a redemption portal or similar. However, based on the success Microsoft had in getting people to upgrade from 8 to 8.1 with a similar notification, I can see why they’ve taken this approach, as it’s far more likely to get people upgrading than a free Windows 10 serial would.
What will be truly interesting to see is whether the pattern of adoption for major Windows versions continues. Windows 7, which is now approaching middle age, still remains unchallenged by the two upstarts that followed it. The barriers to transitioning are now much lower than they once were, however customers have shown that familiarity is something they value above nearly everything else. Windows 10 has all the makings of a Windows version that consumers want, but we all know that what people say they want and what they actually want are two different things.
The rumour mill has been running strong for Microsoft’s next Windows release, fuelled by the usual sneaky leaks and the intrepid hackers who relentlessly dig through preview builds to find things they weren’t meant to see. For the most part though things have largely been as expected, with Microsoft announcing the big features and changes late last year and drip feeding minor things through the technical preview stream. Today Microsoft held their Windows 10 Consumer Preview event in Redmond, announcing several new features that would become part of their flagship operating system as well as confirming the strategy for the Windows platform going forward. Suffice it to say it’s definitely a shake up of what we’d traditionally expect from Microsoft, especially when it comes to licensing.
The announcement that headlined the event was that Windows 10 would be a free upgrade for all current Windows 7, 8, 8.1 and Windows Phone 8.1 customers who upgrade in the first year. This is obviously an attempt to ensure that Windows 10’s adoption rate doesn’t languish in the Vista/8 region, as even though every other version of Windows seems to do just fine, Windows 10 is still different enough for it to cause issues. I can see the adoption rate for current Windows 8 and 8.1 users being very high, thanks to the integration with the Windows Store, however for Windows 7 stalwarts I’m not so sure. Note that this also won’t apply to enterprises, which are responsible for an extremely large chunk of the Windows 7 market currently.
Microsoft also announced Universal Applications, which are essentially the next iteration of the WinRT framework that was introduced with Windows 8. However, instead of relegating some applications to a functional ghetto (like all Metro apps were), Universal Apps share a common base set of functionality with additional code paths for the different platforms they support. Conceptually it sounds like a great idea as it means the different versions of an application will share the same codebase, making it very easy to bring new features to all platforms simultaneously. Indeed if this platform could be extended to encompass Android/iOS it’d be an incredibly powerful tool, although I wouldn’t count on that coming from Microsoft.
Xbox Live will also be making a prominent appearance in Windows 10, with some pretty cool features coming for Xbox One owners. Chief among these, at least for me, is the ability to stream Xbox One games from your console directly to your PC. As someone who currently uses their PC as a monitor for their PS4 (I have a capture card for reviews and my wife didn’t like me monopolizing the TV constantly with Destiny) I think this is a great feature, one I hope other console manufacturers replicate. There’s also cross-game integration for games that use Xbox Live, an inbuilt game recorder and, of course, another iteration of DirectX. This was the kind of stuff Microsoft had hinted at doing with Windows 8, but it seems like they’re finally committed to it with Windows 10.
Microsoft is also expanding its consumer electronics business with new Windows 10 enabled devices. The Microsoft HoloLens is their attempt at a Google Glass-like device, although one that’s more aimed at being used with the desktop rather than on the go. There’s also the Surface Hub, which is Microsoft’s version of the smart board, integrating all sorts of conferencing and collaboration features. It will be interesting to see if these things achieve any sort of meaningful adoption as, whilst they’re not critical to Windows 10’s success, they’re certainly devices that could increase adoption in areas that traditionally aren’t Microsoft’s domain.
Overall the consumer preview event paints Windows 10 as an evolutionary step forward for Microsoft, taking the core of the ideas they attempted with previous iterations and reworking them with a fresh perspective. It will be interesting to see how the one year free upgrade approach works for them, as gaining that critical mass of users is the hardest thing for any platform, even the venerable Windows. The other features are more nice-to-haves than anything else, things that will likely help Microsoft sell people on the Windows 10 idea. Getting this launch right is crucial for Microsoft to execute on their strategy of Windows 10 being the one platform going forward, as the longer it takes to get the majority of users onto it the harder it will be to justify investing heavily in it. Hopefully Windows 10 can be the Windows 7 to Windows 8’s Vista, as Microsoft has a lot riding on this coming off just right.
World of Warcraft might have been my first MMORPG but in the decade that followed I’ve played my fair share of titles in the genre. Few of them have managed to make me come back after the initial play through (indeed I think only EVE Online has) but I’m readily familiar with the idea that my character is something of a temporary thing. All those hours I put into getting them to max level and then kitting them out with gear will likely amount to naught when the next expansion comes out. If they didn’t, I wouldn’t have much incentive to keep playing, as completely maxing out a character would be a one time deal. However if you were to go by the reaction to Destiny’s latest DLC it would appear that the majority of its playerbase thinks the opposite, which is strangely out of touch with reality.
I’ll admit that in the beginning Destiny’s loot system was inherently flawed. Things like Legendary engrams turning into green items meant that you had to pray to RNGesus twice in order to get the purples you desired, something which wasn’t fixed until months after launch. The raid was just as bad, as even if you ran it every week there was no guarantee you’d get the drops you needed to make it to level 30. Indeed I never did, despite my vault now being filled with 7 Chatterwhite shaders (one for every week I ran it). However I still managed to progress my character in other ways, maxing out all my weapons and completing several of the exotic weapon bounties.
Then the DLC dropped and it seemed like I’d be starting from scratch again.
Except I wasn’t. Sure, my exotics weren’t automatically upgraded and the new max level was 32, but I was able to complete all the new content (bar the raid) as my level 29 self with my pre-DLC weapons. I even got randomly invited to the new raid with a bunch of guys just because I had everything maxed out, and whilst we didn’t get past the second boss it was still awesome to give it a go without having to do anything extra. Once I got my head around all the new systems available to me it didn’t take long to figure out that I was a few strange coins, vanguard marks and commendations away from surpassing my previous level cap of 29. In fact I did just that over the weekend and I am now a proud member of the level 31 elite.
By comparison, taking my level 90 paladin in World of Warcraft to level 100 took me the better part of 2 weeks, and he wasn’t even ready to run the raid at that point. For the last week or so I’ve spent the majority of my time in that game gearing him up (increasing his iLvl, which is directly equivalent to the Light level in Destiny) in order to be able to do the new raid. I was finally able to do it late yesterday afternoon after almost a day of in-game time spent doing various dungeons, gathering up the crafting mats and getting lucky on a few drops. In Destiny, to accomplish the same feat, I didn’t have to do any of that. I simply completed a quest chain, did the weekly runs and spent a small portion of my strange coin haul on upgrading my chestpiece. It was honestly one of the most pleasant levelling experiences I’ve ever had in an MMORPG.
I’ll forgive anyone who doesn’t recognise Destiny for the MMO that it is for being angry that all their playtime has been for naught (well, mostly), but eventually they’ll have to recognise that, yes, you’re playing one and this is what happens. Bungie made the levelling process pretty painless, so much so that a filthy casual like myself was able to bash his way to 31 in the space of a weekend. It’s not like all that gear I’ve got is automatically useless either; I’m still rocking my Vision of Confluence and Atheon’s Epilogue most of the time since I haven’t found a good replacement, and I think that’ll hold for some time to come. The worst part might’ve been spending 14,000 glimmer and 14 strange coins on upgrading my 2 exotics of choice, but that’s nothing when glimmer is everywhere and I’ve had 50+ strange coins for weeks.
It’s probably just the loud minority having their voices heard the most in this respect, as I’m sure the vast majority of players are actually enjoying the new content rather than bitching about it. Indeed I was content to keep my big mouth shut about it after getting some time to sit down with it over the weekend, however it seems that the gaming churnalism sites have latched on to the faux outrage with reckless abandon. In all seriousness I hope that those who are bitching about the DLC put their money where their mouth is and walk away, as it’s only a matter of time before the next DLC and I’d rather not have to listen to people whine about all their time being “wasted”.
For the longest time, far too long in my opinion, XP had been the beast that couldn’t be slain. The numerous releases of Windows after it never seemed to make much more than a slight dent in its usage stats and it reigned as the most used operating system worldwide for an astonishing 10 years after its initial release. It finally lost its crown to Windows 7 back in October of 2011 but it still managed to hold on to a market share that dwarfed many of its competitors. Its decline was slow though, much slower than you’d expect for an operating system fast approaching end of life. However last quarter saw it drop an amazing 6% in total usage, finally putting it below the combined usage of Windows 8 and 8.1.
The reasons behind this drop are wide and varied, but it finally appears that people are starting to take seriously Microsoft’s warnings that the product is no longer supported and are looking for upgrades. Surprisingly though, the vast majority of people transitioning away from the aging operating system aren’t going for Windows 7, they’re going straight to Windows 8.1. This isn’t to say that 8.1 is eating away at 7’s market share, however; 7 is up about half a percent in the same time frame, and the upgrade path is likely due to the fact that Microsoft has ceased selling OEM copies of Windows 7. Most of those new licenses do come with downgrade rights, though I’m sure few people actually use them.
If XP’s current downward trend continues along this path then it’s likely to hit low single digit usage figures sometime around the middle of next year. On the surface this would appear to be a good thing for Microsoft, as it means that the majority of their user base will be on a far more modern platform. However, at the same time, the decline might just be a little too swift for those people to consider waiting for Windows 10, which isn’t expected to RTM until late next year. Considering the take-up performance of Windows 8 and 8.1 this could be something of a concern for Microsoft, although there is another potential avenue: Windows 7 users.
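The "middle of next year" estimate is just a straight-line extrapolation, and it's worth being explicit about how rough it is. A quick sketch of the arithmetic, noting that the starting share here is my own assumption (the post only gives the ~6% quarterly drop, not XP's absolute share):

```python
# Rough extrapolation of XP's decline. The starting share below is an
# assumption purely for illustration; the ~6% per-quarter drop comes from
# last quarter's figure and is assumed (optimistically) to hold steady.

def quarters_until(share_pct, target_pct, drop_per_quarter=6.0):
    """Quarters until market share falls below target at a constant decline."""
    quarters = 0
    while share_pct >= target_pct:
        share_pct -= drop_per_quarter
        quarters += 1
    return quarters

# Assuming XP sits somewhere in the low 20s today, low single digits
# (under 10%) is roughly three quarters away, i.e. mid next year.
print(quarters_until(23.0, 10.0))  # 3
```

In practice declines like this rarely stay linear (the long tail of entrenched installs is the slowest to move), so treat this as an upper bound on the pace rather than a forecast.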
The last time Microsoft had a disastrous release like Windows 8, the next version of Windows to take the majority of the market share was 7, which arrived nearly a decade after XP originally released. Whilst it’s easy to argue that this time will be different (like everyone does), a repeat performance of that nature would see Windows 7 being the dominant platform all the way up until 2019. Certainly this is something that Microsoft wants to avoid, so it will be interesting to see how fast Windows 10 gets picked up and which segments of Microsoft’s business it cannibalizes. Should it be primarily Windows 7 based then I’d say everything would be rosy for them, however if it’s all Windows 8/8.1 then we could be seeing history repeat itself.
Microsoft is on the cusp of either reinventing itself with Windows 10 or being doomed to forever repeat the cycle which consumers have forced them into. To Microsoft’s credit they have been trying their best to break out of this mould, however it’s hard to argue with the demands of the consumer and there’s only so much they can do before they lose their customers’ faith completely. The next year will be very telling for how the Microsoft of the future will look and how much of history will repeat itself.
It’s no secret that I’m something of a fan of Windows 8 but then again my experience is somewhat biased by my extreme early adopter attitude. I haven’t yet had to support it in a production environment although I have installed it on varying levels of hardware that I have access to and I’ve yet to struggle with the issues that plagued me with previous Windows releases. The thing is though, whilst I’m a firm believer in Windows 8 and the features it brings, I’m of the opinion that it probably won’t see a high level of adoption in the enterprise space as the default desktop OS but that’s not necessarily a bad thing.
Despite the fact that Windows 7 has been out for a good 4 years at this point many enterprises are still in the midst of deploying it within their organisation. This is wholly due to the initial disaster that Windows Vista was, which caused the vast majority of organisations to not consider it as a possible upgrade to their Windows XP infrastructure. Past SP1 though Vista was a perfectly usable operating system, and by that point many of the OEMs had caught up with their drivers, which were the main cause of headaches for Vista users. Still it seemed the damage was done and Vista never managed to gain the market share it needed, leaving many organisations languishing on XP.
Not only was this bad for Microsoft in terms of sales, it was worse for the organisations that stayed put. Systems that were designed for XP became far more entrenched and the rework required for applications to be Vista compatible got further delayed. Thus when it finally came time to move operating systems the cost of doing so (in both real terms and the effort required) was quite a lot higher, and the larger the organisation the longer the transition would take. Indeed the organisation I’m currently working for still runs XP (using NetWare for directory services, no less) and is only just getting around to rolling out Windows 7 this year due to the sheer number of applications that require remediation.
Whilst Microsoft will likely make good on their promise of delivering updates, like the Windows Blue update this year, and major releases more frequently, organisations are still reeling from their Windows 7 transitions. Windows 9 is still a way off, with estimates for a release ranging anywhere from mid-2014 to somewhere in 2015, but that’s around the time when enterprises will be looking to upgrade in order to get the next set of killer features as Windows 7 starts to show its age. Now it’s entirely possible that with the frequent Blue style updates Windows 8 will become far more attractive to enterprises before this date, but if history has taught us anything it’s that the disruptive versions of Windows are usually the ones that end up being skipped, and Windows 8 certainly fits that bill.
There’s definitely potential for Windows 8 to make inroads into the enterprise space, as the Surface would seem to be an ideal fit for the enterprise even if most of the usability comes from the non-Metro side of it. Developing proper Metro applications for Microsoft’s enterprise products would go a long way to improving its market penetration, and I know that IT admins at large would much prefer to maintain a fleet of Surfaces than a comparable fleet of iDevices. It’s clear that Metro was primarily consumer oriented, but as we know many IT decisions are top-down in nature, and if Microsoft wants to get more people on board, providing a better tablet experience to organizational executives could be the in that Windows 8 needs.
Still, after 2 decades of watching Windows releases it won’t come as a surprise if Windows 8 gets passed over in favour of its next generation cousin. What we really need to avoid though is another decade of OS stagnation, as Windows 7 has the potential to keep alive the mentality that developed around XP, and that just makes change more painful than it needs to be. With Microsoft committed to more releases more often we’re in a good position to avoid this, and all that’s needed is for us to continue pushing our organisations in the right direction.
It’s a sad truth that once a company reaches a certain level of success they tend to stop listening to their users/customers, since by that point they have enough validation to continue down whatever path suits them. It’s a double edged sword for the company as whilst they now have much more freedom to experiment since they don’t have to fight for every customer they also have enough rope to hang themselves should they be too ambitious. This happens more in traditional business rather than say Web 2.0 companies since the latter’s bread and butter is their users and the community that surrounds them, leaving them a lot less wiggle room when it comes to going against the grain of their wishes.
I recently blogged about VMware’s upcoming release of vSphere 5 which whilst technologically awesome did have the rather unfortunate aspect of screwing over the small to medium size enterprises that had heavily invested in the platform. At the time I didn’t believe that VMware would change their mind on the issue, mostly because their largest customers would most likely be unaffected by it (especially the cloud providers) but just under three weeks later VMware has announced that they are changing the licensing model, and boy is it generous:
We are a company built on customer goodwill and we take customer feedback to heart. Our primary objective is to do right by our customers, and we are announcing three changes to the vSphere 5 licensing model that address the three most recurring areas of customer feedback:
We’ve increased vRAM entitlements for all vSphere editions, including the doubling of the entitlements for vSphere Enterprise and Enterprise Plus.
We’ve capped the amount of vRAM we count in any given VM, so that no VM, not even the “monster” 1TB vRAM VM, would cost more than one vSphere Enterprise Plus license.
We adjusted our model to be much more flexible around transient workloads, and short-term spikes that are typical in test & dev environments for example.
The first 2 points are the ones that will matter to most people with the bottom end licenses getting a 33% boost to 32GB of vRAM allocation and every other licensing level getting their allocations doubled. Now for the lower end that doesn’t mean a whole bunch but the standard configuration just gained another 16GB of vRAM which is nothing to sneeze at. At the higher end however these massive increases start to really pile on, especially for a typical configuration that has 4 physical CPUs which now sports a healthy 384GB vRAM allocation with default licensing. The additional caveat of virtual machines not using more than 96GB of vRAM means that licensing costs won’t get out of hand for mega VMs but in all honesty if you’re running virtual machines that large I’d have to question your use of virtualization in the first place. Additionally the change from a monthly average to a 12 month average for the licensing check does go some way to alleviating the pain that some users will feel, even though they could’ve worked around it by asking VMware nicely for one of those unlimited evaluation licenses.
What these changes do is make vSphere 5 a lot more feasible for users who have already invested heavily in VMware’s platform. Whilst it’s nowhere near the current 2 processors + gobs of RAM deal that many have been used to, it does make the smaller end of the scale much more palatable, even if the cheapest option will leave you with a meagre 64GB of vRAM to allocate. That’s still enough for many environments to get decent consolidation ratios, say 8 to 1 with 8GB VMs, even if that’s slightly below the desired industry average of 10 to 1. The higher end, whilst being a lot more feasible for a small number of ridiculously large VMs, still suffers somewhat as higher end servers will need additional licenses to fully utilize their capacity. Of course not many places will need 4 processor, 512GB beasts in their environments, but it’s still going to be a factor to count against VMware.
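To make the numbers above concrete, here's a rough sketch of the revised licensing maths. The per-edition entitlements and the 96GB per-VM cap are as described above, but the edition names and helper functions are my own illustration, not anything VMware ships:

```python
# Rough sketch of the revised vSphere 5 vRAM licensing maths. Entitlement
# figures per edition are as quoted in the post; the helpers themselves are
# purely illustrative, not a VMware tool.

VRAM_ENTITLEMENT_GB = {
    "Standard": 32,         # bottom end, boosted 33% to 32GB
    "Enterprise": 64,       # doubled from 32GB
    "Enterprise Plus": 96,  # doubled from 48GB
}

PER_VM_CAP_GB = 96  # no single VM counts for more than 96GB of vRAM


def counted_vram(vm_sizes_gb):
    """Total vRAM counted against the pool, with the per-VM cap applied."""
    return sum(min(size, PER_VM_CAP_GB) for size in vm_sizes_gb)


def licences_needed(edition, physical_cpus, allocated_vram_gb):
    """Licences required: one per physical CPU, plus extras if the pooled
    vRAM entitlement doesn't cover what's allocated."""
    entitlement = VRAM_ENTITLEMENT_GB[edition]
    pool = physical_cpus * entitlement
    if allocated_vram_gb <= pool:
        return physical_cpus
    shortfall = allocated_vram_gb - pool
    return physical_cpus + -(-shortfall // entitlement)  # ceiling division


# The "monster" 1TB VM counts as a single Enterprise Plus licence's worth.
print(counted_vram([1024]))                        # 96
# A 4-CPU Enterprise Plus host gets a 384GB pool out of the box...
print(licences_needed("Enterprise Plus", 4, 384))  # 4
# ...but allocating 512GB on it means buying extra licences.
print(licences_needed("Enterprise Plus", 4, 512))  # 6
```

That last case is the one counting against VMware at the high end: a fully loaded 4 processor, 512GB host still needs two licences beyond its CPU count.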
The licensing changes from VMware are very welcome and will go a long way for people like me who are trying to sell vSphere 5 to their higher ups. Whilst licensing was never an issue for me I do know that it was a big factor for the majority and these improvements will allow them to stay on the VMware platform without having to struggle with licensing concerns. I have to then give some major kudos to VMware for listening to their community and making these changes that will ultimately benefit both them and their customers as this kind of interaction is becoming increasingly rare as time goes on.
So last week saw me pick up the components that would form my new PC, the first real upgrade I have bought in about 3 years. Getting new hardware is always an exciting experience for someone like me which is probably why I enjoy being in the datacenter so much these days, with all that new kit that I get to play with. I didn’t really have the time to build the PC until the weekend though and so I spent a good 5 days with all the parts laid out on the dining table beside me, begging me to put them together right now rather than waiting. My resolve held however and Saturday morning saw me settle down with a cup of coffee to begin the longest build I’ve ever undertaken.
I won’t go over the specifications again since I’ve already mentioned them a dozen times elsewhere, but this particular build had a few unique challenges that you don’t see in regular PCs. For starters this would be my first home PC with a RAID set in it, comprising 4 1TB Seagate drives held in a drive bay enclosure. Secondly the CPU would be watercooled using a Corsair H70 fully sealed system, and since I hadn’t measured anything I wasn’t 100% sure I’d be able to fit it where I thought I could. Lastly, with all these drives, watercooling and other nonsense, the number of power cables required also posed a unique challenge, as I wasn’t 100% sure I could get them all to fit in my mid-sized tower.
The build started off quite well, as I was able to remove the old components without issue and give the case a good clean before installing bits and pieces in it. The motherboard, CPU and RAM all went together quite easily, as you’d expect, but when it came time to affix the mounting bracket for the watercooling I hit a bit of a stumbling block. You see, the motherboard I purchased does you the favour of having the old style LGA775 mounting holes, letting you use old style coolers on the newer CPUs. This is all well and good, but since the holes are only labelled properly on one side, attempting to line up the backing plate with the right holes proved to be somewhat of a nightmare, especially considering that when it did line up it was at a rather odd angle. Still it mounted and sat flush to the motherboard, so there were no issues there.
The next challenge was getting all the hard drives in. Taking off the front of my case to do a dry fit of the drive bay extension showed that there was a shelf right smack bang in the middle of the 4 bays. No problem, I thought, it looked to just be screwed in; however closer inspection showed that the screws at the front could only be accessed with a right angle screwdriver, since the holes that needed to be drilled for a regular driver hadn’t been done. After several goes with a driver bit and a pair of pliers I gave up and got the drill out, leaving aluminium shavings all over the place and the shelf removed. Thankfully the drive bay extender mounted with no complaints at all after that.
Next came the fun part: figuring out where the hell the watercooling radiator would go. Initially I had planned to put it at the front of the case but the hosing was just a bit too short. I hadn’t bought any fan adapters either, so mounting it on the back would’ve been a half arsed effort with cable ties and screws in the wrong places. After fooling around for a while I found that it actually fit quite snugly under the floppy drive bays, enough so that it barely moved when I shook the case. This gave me the extra length to get to the CPU whilst still being pretty much at the front of the case, although it also meant I could only attach one of the fans, since part of the radiator was mere millimetres away from the end of the graphics card.
With everything put together and wired up it was the moment of truth: I took a deep breath and pressed the power button. After a tense couple of milliseconds (it seemed like forever) the computer whirred into life and I was greeted with the BIOS screen. Poking around in the BIOS though revealed that it couldn’t see the 4 drives I had attached to the external SATA 6Gbps controller, so I quickly booted into the Windows installer to make sure they were all there. They did in fact come up, and after a furious 2 hours of prodding around I found that the external controller didn’t support RAID at all, only the slower ports did. This was extremely disappointing as it was pretty much the reason why I got this particular board, but figuring that the drives couldn’t saturate the old SATA ports anyway I hooked them up and was on my merry way, with the Windows install being over in less than 10 minutes.
I’ve been putting the rig through its paces over the past week and I must say the biggest improvement in performance comes solely from the SSD. The longest part of the boot process is the motherboard initializing the 3 different controllers with Windows loading in under 30 seconds and being usable instantly after logging in. I no longer have to wait for things to load, every program loads pretty much instantaneously. The RAID array is none too shabby either with most games loading in a fraction of the time they used to.
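Those load time gains line up with what a four-drive array can plausibly deliver on paper. A quick back-of-the-envelope sketch, assuming a RAID10 layout (the level I'd originally planned) and an illustrative per-drive throughput figure rather than a benchmark:

```python
# Back-of-the-envelope numbers for a 4-drive RAID10 array. The per-drive
# sequential throughput is an assumed round figure, not a measurement.

def raid10(drive_count, drive_capacity_gb, drive_seq_mb_s):
    """RAID10 mirrors pairs of drives then stripes across the mirrors:
    half the raw capacity, with reads roughly scaling across all spindles
    and writes across the stripe width (half the drives)."""
    assert drive_count % 2 == 0 and drive_count >= 4
    usable_gb = drive_count * drive_capacity_gb // 2
    read_mb_s = drive_count * drive_seq_mb_s        # optimistic ceiling
    write_mb_s = (drive_count // 2) * drive_seq_mb_s
    return usable_gb, read_mb_s, write_mb_s


# Four 1TB drives at an assumed ~150MB/s sequential each.
print(raid10(4, 1000, 150))  # (2000, 600, 300)
```

Even the optimistic 600MB/s read ceiling assumes perfectly sequential access; the SSD still wins comfortably on the random reads that dominate boot and application launch, which is why the subjective difference is so stark.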
Sadly, with all games being optimized for consoles these days, the actual performance improvement in nearly every game I’ve thrown at it has been minimal. Still, Crysis 2 with all the settings maxed out looks incredibly gorgeous, and I can’t seem to make it chug even on the biggest multi-player maps. The new mouse I bought (a Logitech G700) is quite an amazing bit of kit too, and the TRON keyboard my wife got me for my birthday just adds to the feeling that I’m using a computer from the future. Overall I’m immensely satisfied with it and I’m sure it’ll prove its worth once I throw a few more programs at it.
Speaking of which, I can’t wait to code on that beasty.
My main PC at home is starting to get a little long in the tooth, having been ordered back in the middle of 2008 and only receiving a graphics card and a hard drive upgrade since then. Like all PCs I’ve had, it suffered a myriad of problems that I usually just put up with until I stumbled across a workaround, but I think the vast majority of them can be traced to a faulty motherboard (it can’t take more than 4GB of RAM or it won’t POST) and a batch of faulty hard drives (that would randomly park their heads, causing the system to freeze). At the time I had the wonderful idea of buying the absolute latest so I could upgrade cheaply for the next few years, but thanks to the consolization of games I found that wasn’t really necessary.
To be honest it’s not even really necessary now either, with all the latest games still running at full resolution and most at high settings to boot. I am starting to lag on the technology front however with my graphics card not supporting DirectX 11 and everything but the RAM being 2 generations behind (yes, I have a Core 2 Duo). So I took it upon myself to build a rig that combined the best performance available of the day rather than trying to focus on future compatibility. Luckily for me it looks like those two are coinciding.
Because, like any good geek, I love talking shop when it comes to building new PCs, here are the specs of the potential beast in the making:
The first couple of choices I made for this rig were easy. Hands down the best performance out there is with the new Sandy Bridge i7 chips, with the 2600K being the top of the lot thanks to its unlocked multiplier and hyperthreading, which the chips below the 2600 lack. The choice of graphics card was a little harder: whilst the Radeon comes out leagues ahead on a price-to-performance ratio, the NVIDIA cards still had a slight overall performance lead, though hardly enough to justify the price. Knowing that I wanted to take advantage of the new SATA 6Gbps drives that were coming out, my motherboard choice was almost made for me, as the ASRock P67 seems to be one of the few with more than 4 of those ports available (it has 6, in fact).
The choice of SSD however, whilst extremely easy at the time, became more complicated recently.
You see, back in the initial pre-production review round the OCZ Vertex 3 came out shooting, blasting away all the competition in a seemingly unfair comparison with its predecessors. I was instantly sold, especially considering the price was looking to be quite reasonable: around the $300 mark for a 120GB drive. Sure, I could opt for the bigger drive and dump my most frequently played games on it, but in reality a RAID10 array of SATA 6Gbps drives should come close enough without having to overspend on the SSD. As with any pre-production review I made sure to keep my ear to the ground in case something changed once they started churning them out.
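For the curious, the reasoning behind "a RAID10 array should come close enough" is easy to sketch on the back of an envelope. This is a minimal, idealised sketch assuming reads can be spread across every spindle and writes hit one drive per mirror pair; the drive count, capacity and per-drive throughput figures below are hypothetical, and real arrays fall short of these ceilings:

```python
# Back-of-envelope RAID10 maths (idealised; real-world numbers will be lower).
def raid10_estimate(drives, capacity_gb, seq_read_mbs):
    """Usable capacity and rough sequential throughput for a RAID10 array."""
    usable = drives * capacity_gb / 2     # half the raw space goes to mirroring
    read = drives * seq_read_mbs          # reads can be serviced by every spindle
    write = drives / 2 * seq_read_mbs     # writes land on one drive per mirror pair
    return usable, read, write

# Four hypothetical 1TB SATA 6Gbps spinners at ~150 MB/s sequential each
usable, read, write = raid10_estimate(4, 1000, 150)
print(usable, read, write)  # 2000.0 GB usable, 600 MB/s reads, 300.0 MB/s writes
```

Even with generous assumptions the array's sequential reads only approach SSD territory; random access is where the SSD remains untouchable.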
Of course, something did.
The first production review that grabbed my attention was from AnandTech, renowned for their deep understanding of SSDs and their honest, accurate reviews. The results for my drive size of choice, the 120GB, were decidedly mixed, with it falling down in several places where the 240GB version didn’t suffer any such problems. Another review confirmed the figures were in the right ballpark, although it unfortunately lacked a comparison with the 240GB version. The reason behind the performance discrepancy is simple: whilst the drives are functionally identical, the difference comes from the number of NAND chips used to build them. The 240GB version has double the number of the 120GB version, which allows for higher throughput and additionally grants the drive a larger scratch space that it can use to optimize its performance¹.
So of course I started to rethink my position. The main reason for getting a regular SATA SSD over something like the PCIe-bound RevoDrive was that I could use it down the line as a jumbo flash drive if I wanted to, and I wouldn’t have to sacrifice one of my PCIe lanes to use it. The obvious competitor to the OCZ Vertex 3 would be something like the Intel 510 SSD, but the reviews haven’t been very kind to that device, putting it barely in competition with previous-generation drives.
After considering all my options I think I’ll still end up going with the 120GB OCZ Vertex 3. Whilst it might not lead in every performance category, it does provide tremendous value compared to a lot of other SSDs, and it will be in another league compared to my current spinning-rust hard drive. Once I get around to putting this new rig together you can rest assured I’ll put the whole thing through its paces, if only to see how the OCZ Vertex 3 stacks up against the numbers that have already been presented.
¹Ever wondered why some SSDs come in odd sizes? They are in fact good old-fashioned binary sizes (128GB and 256GB respectively), however the drive reserves a portion of that (8GB and 16GB) to use as scratch space to write and optimize data before committing it. Some drives also use it as a buffer for when flash cells become unwritable (flash cells don’t usually die outright, you just can’t write to them anymore) so that the drive’s capacity doesn’t degrade.
Whenever you go out to buy some piece of tech you’re pretty much guaranteed that in just a couple of months there will be something better available for the same price. I faced exactly that dilemma when I bought my iPhone about 2 months ago and came to the decision that I might as well get the most expensive one I could (since I could write it off) and one that I would eventually be developing for. Shortly afterwards the whole iPhone 4G leak happened and many people asked why I didn’t “just wait a few months” to get the new one. The answer is that the benefit of having the phone for 3 months outweighed the delay in getting the new one. I could’ve snagged myself an Android phone in the meantime but again I would’ve ended up in much the same situation, as the handset of choice at that time was the HTC Incredible and now it is the HTC EVO 4G.
Last night marked the official announcement of the phone everyone told me to wait for: the iPhone 4. Realistically it would be a much more impressive device if I hadn’t heard everything there is to know about it constantly over the past 2 months (thanks to Gizmodo et al.), but that doesn’t detract from the fact that it is an improvement over the current iPhone offering. Whilst Apple’s tagline for it is “This changes everything. Again.” I’ll go on record saying that it changes about as much as the iPad did with all its “magic”, which is to say not a hell of a lot.
First let’s have a look over the specifications to see what we’re actually dealing with here:
(For some reason Apple wants to make mention of the fact that their iPhone has multi-touch twice, that’s not a typo on my behalf)
First off, let me compliment Apple on the things that really are something. The display is pretty phenomenal, offering the highest resolution on any smartphone I’ve seen to date. They’re calling it the Retina Display, as the dots per inch (DPI) is above the magic ~300 DPI threshold beyond which our eyes supposedly can’t distinguish individual pixels. Whilst most users won’t notice much of a difference (showing people my Xperia side by side with an iPhone saw most thinking the iPhone had the better display) it does mean it should be quite a gorgeous screen. It’s no technical marvel beyond the resolution though, as it’s just your plain old LED-backlit LCD.
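Where that Retina figure comes from is simple geometry: pixel density is the diagonal pixel count divided by the diagonal size. A quick sketch using the iPhone 4’s 960×640 panel and its nominal 3.5-inch diagonal (Apple quotes 326 ppi; the physical diagonal is fractionally larger than 3.5", hence the small discrepancy):

```python
import math

def ppi(width_px, height_px, diagonal_in):
    """Pixel density: diagonal resolution over diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# iPhone 4 panel on its nominal 3.5-inch diagonal
print(round(ppi(960, 640, 3.5), 1))  # ~329.7, comfortably past the ~300 ppi mark
```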
The other most notable upgrades are in the guts of the phone: 802.11n wireless, a 3-axis gyro, dual mics and the new Apple A4 processor which debuted with the iPad. They’re all quite decent upgrades, and really, had these been left out you’d be wondering what the hell Apple’s research and development department was doing, as they’ve been standard on most phones for the past year or so. The addition of Apple’s new A4 into the iPhone 4 brings it up to speed with the latest swath of Snapdragon-based Androids, hopefully paving the way for some more intensive applications to make their way onto the handheld iPlatform. The inclusion of a 3-axis gyro is interesting: no one will argue against the fact that it will make motion detection more accurate, but the use cases for it are few in number. Sure, your Doodle Jump will be a lot more accurate, but is it really required? Time will tell, though; developers always have a habit of exploiting additional features like this in ways we don’t really expect.
For the rest of the features though I’m a little less impressed. You see, way back when the 3GS (and really even the 3G model) was released, dual cameras, with the back one being 5+ megapixels, were the norm on many feature and smart phones. Their omission on the iPhone was puzzling to say the least, as the technology had been around for quite some time with proven implementations across several brands. Much like the lack of MMS in the original iPhone, Apple’s omission of such features confounded the tech crowd whilst the rabid fanboy population declared they were not required. Consequently, when Apple finally caved it was touted as revolutionary, an almost textbook case of doublethink. Whilst the hype about these things is low at the moment I’m sure I’ll come across those who trick themselves into believing that Apple is revolutionizing this space when really they’re playing catch-up with the rest of the modern world.
The inclusion of HD video recording capabilities on the iPhone is a good step forward and matches many of its competitors’ offerings. Whilst I’ve yet to see an actual sample of video direct from the camera, I can tell you now that it’s more of a gimmick than anything else, as cameras that small just don’t have the sensor area required to make decent 720p video. It’s not Apple’s fault really, as any camera capable of producing proper HD video will have a sensor almost 1/5th the size of the iPhone itself, with an appropriately sized lens to match. No one has extolled the virtues of the video yet so I’ll let this one slide for now, but if anyone dares tell me it’s good HD I’ll probably have to take a bandsaw to their new iPhone, just to teach them a lesson.
Overall I’d say it’s a good evolution of the current iPhone offering, and my issues, as always, lie in the hype and marketing behind it. Looking over the phone I can say that had I known these specs before buying my current phone (neglecting the fact that they release a new damned phone every year) I would’ve given a lot more consideration to buying an Android handset first. I’m still not sure it would’ve changed my mind though, as 3 months is quite a wait when you’ve got a free phone voucher burning a hole in your pocket. The upgraded specs are sure to please those upgrade-happy tech heads, and the under-the-hood upgrades are sure to give devs some new ideas for their applications.
At least there’s no magic in this phone. This post would’ve been a lot less level headed if they had used that term to describe one of their products again 😉
Have a look at the top of your web browser. Notice anything different? If I’ve done everything correctly you should now be looking at this page at www.therefinedgeek.com.au and not my old address. Yes, I decided to listen to my peers and buy the domain name. For now the DNS routing to this address is a bit hacky, but that will all change come the 26th when I get my static IP address. If the site goes down temporarily it’s probably because my Internet connection dropped and I had to manually update the host record, but I don’t see it staying down for long.
When I first built this site I was doing it mostly to get some exposure to web technologies, predominantly Windows Server 2008 and the goodies that come with it. I was happy with a DynDNS account that would automatically route everyone to my website no matter what happened to my connection, but that all changed after one of my old friends contacted me.
Whilst getting a domain name was always on the table I had never really considered the potential benefits of having one. Sure, there’s the whole brand recognition thing and the small amount of prestige that comes from a unique name on the web, but what really got to me was how someone else could be making money off my work without having to do anything apart from hosting a DNS service. I guess he knew one of my weak points and wanted to help out; I get pretty motivated when I find out someone is making more from me than I think they should 😉
It’s also a natural progression for a site that started out as just a test bed for various web technologies but evolved into the creative outlet I use it for today. I’ve never really worked with a proper domain name before, and if you were unfortunate enough to come across the site whilst I was setting everything up you would’ve been greeted by various levels of errors, funny-looking pages and redirection loops. All part of the process, though it took a good hour of fooling around to get everything right.
So, update your bookmarks, RSS feeds and whatever else you may have this site flagged as. I’ll probably keep the old link up for a little while before turning it off, as I don’t want people relying on that one.