Technology

Nokia Resurfaces as a…Virtual Reality Video Startup?

Nokia was once the king of phones, the brand everyone wanted. For many it was because they made a solid handset that did what it needed to do: make calls and send text messages. Their demise came from their inability to adapt to the rapid pace of innovation spurred on by Apple and Google; their smartphone offerings came too late and their customers left for greener pastures. The end result was that the handset business was offloaded to Microsoft while a smaller Nokia remained independent, one that retained the patent portfolio and the research and development arm. It seems that part of Nokia is looking to head in crazy new directions, its first product being the Ozo, a 360 degree virtual reality video camera.

Whilst Nokia isn’t flooding the newswires with details just yet we do know that the Ozo is a small spherical device that incorporates 8 cameras and microphones able to capture video and sound from any angle. It’s certainly not the first camera of its kind, with numerous competitors already selling products in this space, but it is one of the better looking offerings out there. As for how it’d fare against that competition, that’s something we’ll have to wait and see as the first peek at Ozo footage is slated to arrive just over a week from now.

At the same time Nokia has taken to the Tongal platform, a website that allows brands like Nokia to coax filmmakers into doing stuff for them, to garner proposals for videos that will demonstrate the “awesomeness” of the Ozo platform. To entice people to participate there’s a total of $42,000 and free Ozo cameras up for grabs for two lucky filmmakers, something which is sure to attract a few to the platform. Whether that’s enough to make them the platform of choice for VR filmmakers though is another question, one I’m not entirely sure that Nokia will like the answer to.

You see whilst VR video has taken off of late, thanks to YouTube’s support of the technology, it’s really just a curiosity at this point. The current technology effectively locks it out of cinemas, since you’d need to strap an Oculus Rift or equivalent to your head to experience it, so its appeal is currently limited to tech demos, 3D renderings and a smattering of indie projects. The market for such a device therefore seems pretty small, especially when you consider there are already a few players selling products in this space. So whilst Nokia’s latest device may be a refreshing change for the once king of phones I’m not sure it’ll become much more than a hobby for the company.

Maybe that’s all Nokia is looking for here, throwing a wild idea out to the public to see what they’d make of it. Nokia wasn’t exactly known for its innovation once the smartphone revolution began but perhaps they’re looking to change that perception with the Ozo. I’m not entirely convinced it will work out for them (anyone can throw together a slick website with great press shots) but the reaction from the wider press seems to indicate genuine excitement about the potential this might bring.

Intel and Micron Announce 3D XPoint Memory.

The never-ending quest to satisfy Moore’s Law means we’re always looking for ways to make computers faster and cheaper. Primarily this focuses on the brain of the computer, the Central Processing Unit (CPU), which in most modern machines is now home to transistors numbering in the billions. The other components haven’t been resting on their laurels either, as shown by the radical improvement in speeds from things like Solid State Drives (SSDs), high-speed interconnects and graphics cards that are just as jam-packed with transistors as any CPU. One aspect that’s been relatively stagnant however is RAM which, whilst increasing in speed and density, has only seen iterative improvements since the introduction of the first Double Data Rate (DDR) standard. Today Intel and Micron announced 3D XPoint, a new memory technology that sits somewhere between DRAM and NAND in terms of speed.

Details on the underlying technology are a little scant at the moment, however what we do know is that instead of storing information by trapping electrons, like current DRAM and NAND do, 3D XPoint (pronounced “cross point”) stores bits via a change in resistance of the memory material. If you’re like me you’d probably assume this was some kind of phase change memory, however Intel has stated that it’s not. What they have told us is that the technology uses a lattice structure that doesn’t require transistors to read and write cells, allowing them to dramatically increase density, up to 128Gb per die. It also has the benefit of being much faster than the NAND that powers current SSDs, although somewhat slower than DRAM, whilst retaining the advantage of being non-volatile.

Unlike most new memory technologies, which often purport to be replacements for one type of memory or another, Intel and Micron are positioning 3D XPoint as an addition to the current architecture. Essentially your computer has several types of memory, each used for a specific purpose. There’s memory directly on the CPU (cache) which is incredibly fast but very expensive, so there’s only a small amount. The second type is RAM, which is still fast but can be had in far greater amounts. The last is your long term storage, either in the form of spinning rust hard drives or an SSD. 3D XPoint would sit between the last two, providing a kind of high speed cache that holds onto often used data which is then persisted to disk. Funnily enough the idea isn’t that novel, the Xbox One uses a similar tiered approach with its ESRAM, so there’s every chance it might end up happening.
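
If the idea of a persistent tier wedged between RAM and disk sounds abstract, a toy model helps. The sketch below is purely illustrative (the class and its LRU policy are my own invention, not anything Intel or Micron have described): a small, fast tier fronts a large, slow one, hot data gets promoted on access, and writes flow through so the slow tier stays authoritative.

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of a storage stack with a fast, persistent cache tier
    (the role 3D XPoint is proposed to fill) in front of slow disk."""

    def __init__(self, cache_capacity):
        self.cache = OrderedDict()        # fast tier: limited capacity, LRU eviction
        self.disk = {}                    # slow tier: effectively unlimited
        self.cache_capacity = cache_capacity

    def read(self, key):
        if key in self.cache:             # cache hit: served from the fast tier
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.disk[key]            # cache miss: fall back to disk
        self._promote(key, value)         # keep recently used data in the fast tier
        return value

    def write(self, key, value):
        self._promote(key, value)
        self.disk[key] = value            # write through so disk stays authoritative

    def _promote(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry

store = TieredStore(cache_capacity=2)
store.write("a", 1)
store.write("b", 2)
store.write("c", 3)      # "a" is evicted from the fast tier but survives on disk
print(store.read("a"))   # miss -> fetched from disk and promoted again
```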

The reason this is exciting is that Intel and Micron are already going into production with these new chips, opening up the possibility of a commercial product hitting shelves in the very near future. Whilst integrating it in the way described in the press release would take much longer, due to the change in architecture, there’s a lot of potential for a new breed of SSDs to be built on this technology. They might be an order of magnitude more expensive than current SSDs, however there are applications where you can’t have too much speed and for those 3D XPoint could be a welcome addition to the storage stack.

Considering the numerous technological announcements we’ve seen from other large vendors that haven’t amounted to much it’s refreshing to see something that could hit the market in short order. Whilst Intel and Micron are still staying mum on the details I’m sure the next few months will see more information make its way to us, hopefully followed closely by demonstrator products. I’m very interested to see what kind of tech is powering the underlying cells, as a non-phase-change, resistance based memory would be truly novel and, once production hits at-scale levels, could fuel another revolution akin to the one we saw with SSDs all those years ago. Needless to say I’m excited to see where this is heading and I hope Intel and Micron keep us in the loop with new developments.

Broadband Has to Remain Expensive Because…NBN? Come on…

In terms of broadband Australia doesn’t fare too well, ranking somewhere around 58th in the world for speed whilst being among the most expensive, both in real dollar terms and in dollars per advertised megabit. The original FTTP NBN would’ve elevated us out of the Internet doldrums, however the switch to the MTM solution has severely dampened any hopes we had of achieving that goal. If you were to ask our current communications minister, the esteemed Malcolm Turnbull, what he thought about the current situation he’d refer you to a report that states we need to keep broadband costs high in order for the NBN to be feasible. Just like most things he and his department have said about the NBN this is completely incorrect and is nothing more than pandering to the incumbent telcos.

The argument in the submission centres on the idea that if current broadband prices are too cheap then customers won’t be compelled to switch over to the new, obviously vastly more expensive, NBN. The submission notes that even a 10% reduction in current broadband prices would be enough to trigger this, something which could occur if Telstra were forced to drop its wholesale prices. A quick look over the history of the NBN and broadband pricing in Australia doesn’t support that narrative however, mostly because the price gap they claim would be so damaging already exists.

You see if you look at current NBN plan pricing the discrepancies are already there, even when you compare plans with similar advertised speeds. A quick look at iiNet’s pricing shows that a bog standard ADSL2+ connection with a decent amount of downloads will cost you about $50/month whereas the equivalent NBN plan runs about $75/month. Decreasing the ADSL2+ plan by 10%, a whopping $5, isn’t going to change much when there’s already a $25/month price differential between the two. Indeed if people only chose the cheaper option then we should’ve seen that reflected in the adoption rates of the original NBN, correct?

However as the adoption rates show, Australians are ready, willing and able to pay a premium for better Internet services and have been doing so for years with the original FTTP NBN. The fact of the matter is that whilst ADSL2+ may advertise NBN-level speeds it almost always delivers far less, with most customers only getting a fraction of the speed they’re promised. The FTTP NBN on the other hand delivers exactly the speeds it advertises and thus its value proposition is much greater than the ADSL2+ equivalent. The MTM NBN unfortunately won’t have this capability due to its reliance on FTTN technologies, which simply can’t make the same promises about speed.
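
To put some rough numbers on that value proposition, here’s a back-of-the-envelope dollars-per-megabit comparison using the plan prices quoted above. The speeds are my own illustrative assumptions (ADSL2+ sold as “up to” 24Mbps but often delivering single digits over a long copper run, an NBN 25/5 plan delivering close to its rated speed), not figures published by iiNet.

```python
# Rough dollars-per-megabit comparison using the plan prices quoted above.
# Advertised and "typical delivered" speeds are illustrative assumptions.
plans = {
    "ADSL2+ ($50/month)": {"price": 50, "advertised": 24, "typical": 8},
    "NBN 25 ($75/month)": {"price": 75, "advertised": 25, "typical": 24},
}

for name, p in plans.items():
    per_advertised = p["price"] / p["advertised"]
    per_delivered = p["price"] / p["typical"]
    print(f"{name}: ${per_advertised:.2f}/Mbps advertised, "
          f"${per_delivered:.2f}/Mbps actually delivered")

# Under these assumptions the NBN plan is pricier on paper but noticeably
# cheaper per megabit you actually receive, which is the whole point.
```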

It’s things like this that do nothing to endear the Liberal party to the technical vote as it’s so easy to see through the thin veil of political posturing and rhetoric. The facts on this matter are clear: Australians want better broadband and they’re willing to pay for it. Having cheaper options isn’t going to change that; instead it will give those who are currently locked out of the broadband market a way in. Those of us who need faster connections will happily pay the premium knowing full well that we’ll get the speeds that are advertised rather than a fraction of them. The sooner the Liberal party wakes up and realises this the better, but I’m not holding out any hope that they will.

Windows 10 to Have Mandatory Updates for Home Users.

Left to their own devices many home PC users will defer installing updates for as long as humanly possible, with some turning off the auto-update system completely to get rid of those annoying pop ups. Of course this means that patches, which routinely arrive within days of an exploit being discovered, often go uninstalled. This leaves many users unnecessarily vulnerable to security breaches, something which could be avoided if they just installed updates once in a while. With Windows 10 it seems that most users won’t have a choice: they’ll be getting all Microsoft updates whether they want them or not.

Currently you have a multitude of options when configuring Windows Update. The default setting lets Windows decide when to download, install and reboot your computer as necessary. The second does all the same except it lets you choose when to reboot, useful if you don’t leave your computer on constantly or don’t like it rebooting at random. The third is essentially a notification-only option that tells you when updates are available but leaves it up to you to choose which ones to download and install. The last is, of course, to disable the service completely, something not many IT professionals would recommend.
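
For the curious, the pre-Windows 10 setting behind those options boils down to a single AUOptions value in the registry. The snippet below is a rough sketch only: the key path and value meanings are as I remember them from Microsoft’s documentation, so treat both as assumptions, and expect policy-managed machines to store the setting elsewhere.

```python
# Read the pre-Windows-10 automatic update setting from the registry.
# The key path and value meanings are assumptions from memory, not gospel.
# Windows only (winreg does not exist on other platforms).
import winreg

AU_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update"

OPTIONS = {
    1: "Never check for updates",
    2: "Check for updates but let me choose whether to download and install",
    3: "Download updates but let me choose whether to install",
    4: "Install updates automatically",
}

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AU_KEY) as key:
        value, _ = winreg.QueryValueEx(key, "AUOptions")
        print(f"AUOptions = {value}: {OPTIONS.get(value, 'unknown setting')}")
except OSError:
    print("Setting not present (managed by policy or a newer Windows build).")
```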

Windows 10 narrows this down to just the first two options for Home users, removing their ability to skip updates they don’t want. Nor is this limited to a specific class of updates (say, security): feature updates and even drivers could potentially find their way into the mandatory system. Users of the Pro version of Windows 10 will be able to defer feature updates for up to 8 months (the Current Branch for Business) however past that point they’ll be cut off from security updates, something which I’m sure none of them want. The only version of Windows 10 with long term deferral of feature updates will be the Enterprise edition, which can elect to receive only security updates between major Windows releases.

Predictably this has drawn the ire of many IT professionals and consumers alike, mostly due to the inclusion of feature updates in the mandatory update scheme. Few would argue that mandatory security updates are a bad thing, indeed upon first hearing about this that’s what I assumed it would be, however lumping Windows feature updates in alongside them makes for a much less palatable affair. Keen observers have pointed out that this is likely down to Microsoft attempting to mold Windows into an as-a-service offering alongside products like Office 365. For products like that continuous (and mandatory) updates aren’t so much of a problem since they’re vetted against a single platform, however for home users it’s a little more problematic given the numerous variables at play.

Given that Windows 10 is slated to go out to the general public in just over a week it’s unlikely that Microsoft will drastically change this position anytime soon. For some this might be another reason to avoid upgrading, although I’m sure the lure of a free upgrade will be hard to ignore. For businesses it’s less of an issue as they still have the freedom to update how they please. Microsoft has shown that they’re intent on listening to their consumer base however, and should there be enough outrage there’s every chance they’ll change their position. This won’t stop me upgrading, of course, but I’m one of those people who has access to any version I might want.

Not everyone is in as fortunate a position as I am.

Polymer Photovoltaic Cells See Efficiency Boost with Better Designs.

The solar cells you see on many roofs today are built out of silicon, the same stuff that powers your computer and smartphone. The reasons for this are many but it mostly comes down to silicon’s durability, its semiconductor properties and the ease with which we can mass produce it thanks to decades of investment in semiconductor manufacturing. However they’re not the only type of solar cell we can create; there’s a different type based on polymers (essentially plastics) that has the potential to be much cheaper to manufacture. The technology is still very much in its infancy though, with peak efficiency (the rate at which sunlight is converted into electricity) sitting at around 10%, well below even commercial grade silicon panels. New research however could change that dramatically.

The current standard for organic polymer based solar cells uses two primary materials. The first is, predictably, an organic polymer that absorbs photons and turns them into electrons. These polymers are then doped with a special structure of carbon called fullerene, more commonly known as buckyballs (named after Buckminster Fuller, thanks to their geodesic, soccer ball like structure). However the structures that form with current manufacturing processes are somewhat random. This often means that when a photon produces a free electron it recombines before it can be used to generate electricity, which is what leads to polymer cells’ woeful efficiency. New research however points to a way to give order to this chaos and, in the process, greatly improve efficiency.

Researchers at the US Department of Energy’s SLAC National Accelerator Laboratory have developed a method to precisely control the layout of the polymers and fullerenes, rather than the jumbled mess that is currently standard. They then used this method to test various arrangements to see which one produced the highest efficiency. Interestingly the best arrangement was one that mimicked the structure plants use when they photosynthesize. This meant that the charge created in the polymer by a photon wasn’t recombined almost instantly as it usually is; indeed the polymers were able to hold charge for weeks, providing a major step up in efficiency.

Whilst this research goes a long way towards solving one of the major problems with polymer based solar cells there are still other issues to address before they become commercially viable. Whilst a typical silicon solar cell will last 20 years or more, a polymer one will only last a fraction of that time, usually around 4 years with current technology. For most solar cells that’s about when they’ve just paid back their initial investment (both in terms of energy and money), so until polymer cells get past this roadblock they’ll remain an inferior product.

Still research like this shows there’s potential for other technologies to compete in the same space as silicon, even if there are still drawbacks to be overcome. Hopefully this research will provide further insights into increasing the longevity of these panels at the same time as increasing their efficiency. Then polymer panels could potentially become the low cost, mass produced option enabling a new wave of investment to come from consumers who were previously locked out by current photovoltaic pricing.

The Ultrabook Upgrade Conundrum.

I’ve had my ASUS Zenbook UX32V for almost three years now and, if I’m quite honest, the fact that it’s managed to last this long has surprised me. Not so much from an “it’s still working” perspective, more that it still seems just as capable today as it did back then. Still, it has begun to show its age in some regards, like the small 28GB SSD (which for some reason doesn’t show up as a unified device) being too cramped for any in-place upgrades. Plus I figured that this far down the line there was bound to be something better, sleeker and possibly far cheaper, and so began the search for my ultrabook’s replacement. That search has shown that, whilst there are dozens of options available, compromise on one or more aspects is the name of the game.

Essentially what I was looking for was a modern replacement for the UX32V which, in my mind, had the following essential features: small, light, discrete graphics and a moderately powerful CPU. Of course I’d be looking to improve on most other aspects as much as I could, such as a better screen, longer battery life (it gets at most a couple of hours when gaming now) and a larger SSD so I don’t run into the same issues again. In general terms pretty much every ultrabook out there ticks most of those boxes, however once I start adding in certain must-have features things get a little sticky.

For starters a discrete graphics card isn’t exactly standard fare for an ultrabook, even though I figured that since ASUS crammed a pretty capable unit into the UX32V they’d be everywhere by the time I went looking again. No, for most ultrabooks, which now seem to be defined simply as slim and light laptops, the graphics of choice is Intel’s integrated chipset, which isn’t particularly stellar for anything graphically intensive. Larger ultrabooks, especially those with very high resolution screens, tend to come with a lower end discrete card but, unfortunately, they also bring with them the added bulk of their size.

Indeed it seems anything that brings with it a modicum of power, whether from a discrete graphics chip or a beefier processor, also comes with an increase in heft. After poking around for a while I found that many of the smaller models come with a dual core chip, which can leave them CPU bound for some tasks. Adding a quad core chip usually means the laptop swells in thickness to accommodate the extra heat output of the larger chip, usually pushing it out of ultrabook territory.

In the end the conclusion I’ve come to is that a sacrifice needs to be made so that the majority of my requirements can be met. Of all the ultrabooks I looked at the Alienware 13 (full disclosure: I work for Dell, Alienware’s parent company) meets most of the specifications whilst unfortunately falling short on the CPU side and being noticeably thicker than my current Zenbook. However those are two tradeoffs I’m more than willing to make given that it meets every other requirement I have and the reviews seem to be good. I haven’t taken the plunge yet, I’m still wondering if there’s another option out there I haven’t seen, but I’m quickly finding out that having all the choice in the world may mean you really have no choice at all.

Microsoft Builds Four Bridges to Universal Apps.

Windows 8 was supposed to bring with it the platform by which developers could produce applications with consistent experiences across platforms. This came in the form of Metro (now Modern) apps powered by the WinRT framework, which had all the right technological bells and whistles to make such a thing possible. However the Windows 8 desktop experience was much maligned, with much of that criticism aimed squarely at the Metro apps, and the platform unification dream died a quick death. Microsoft hasn’t left that dream behind though and their latest attempt to revive it comes in the form of Universal Applications. This time around they’re taking a slightly different approach: letting developers build what they want and giving them a path to port it directly across to the Windows platform.

Under the hood the architecture of Universal Apps is similar to that of their Metro predecessors, providing a common core of functionality across platforms; the difference is that developers can now build their own platform specific code on top of that core binary. This alleviates the main issue most people had with Metro apps of the past (i.e. they felt out of place pretty much everywhere) and allows developers to craft their own UX for each platform they want to target. Coupled with the new “4 bridges” strategy, which defines a workflow for bringing each major platform into the Universal App fold, Microsoft has a very compelling case for developers to spend time bringing their code across.
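
To make the shared-core idea concrete, here’s a deliberately language-neutral sketch (Python rather than the C#/XAML or C++ a real Universal App would use, and with entirely hypothetical class names): one core module carries the platform-agnostic logic while each platform supplies its own thin presentation layer on top.

```python
# Illustrative sketch of the shared-core pattern only; not actual
# Universal App code. All names are hypothetical.

class NoteCore:
    """Platform-agnostic business logic, shared by every target."""
    def __init__(self):
        self.notes = []

    def add(self, text):
        self.notes.append(text)

    def all(self):
        return list(self.notes)


class DesktopShell:
    """Desktop-specific presentation: dense, mouse-driven list view."""
    def __init__(self, core):
        self.core = core

    def render(self):
        return "\n".join(f"[{i}] {n}" for i, n in enumerate(self.core.all()))


class PhoneShell:
    """Phone-specific presentation: one note per screenful."""
    def __init__(self, core):
        self.core = core

    def render(self, page=0):
        notes = self.core.all()
        return notes[page] if page < len(notes) else "(no more notes)"


core = NoteCore()
core.add("Buy milk")
core.add("Finish blog post")
print(DesktopShell(core).render())   # same core, desktop-shaped UX
print(PhoneShell(core).render(1))    # same core, phone-shaped UX
```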

As I talked about previously, the two major smartphone platforms get their own bridges: Project Islandwood (iOS) and Project Astoria (Android). Since the first announcement it doesn’t seem that much has changed with this particular strategy, however one key detail I didn’t know at the time is that you’ll be able to import your Xcode project directly into Visual Studio, greatly reducing the effort required to get going. What kind of support they’ll have for Android applications, like whether they’ll let you import Eclipse projects, remains to be seen unfortunately. They’ve also announced the bridge for web applications (Project Westminster), although that’s looking more and more like a modern version of ActiveX rather than something web developers will actually be interested in pursuing.

The latest bridge to be announced is Project Centennial, a framework that will allow developers to port current Win32 applications to the Universal platform. Whilst this likely won’t be the cure for everyone’s woes with migrating poorly coded applications onto a more modern OS (App-V and other application virtualization technologies remain the only real treatment for that) it does provide an avenue for aging code bases to be revamped for a new platform without a herculean amount of effort. Of course this means you’ll need both the original codebase and a willingness to rework it, both of which seem to be rare for old corporate applications that can’t seem to die gracefully. Still, another option is always welcome, especially if it drives further adoption of the Universal platform.

Universal Apps seem to have all the right makings of a revolutionary platform, however I can’t help but take a reserved position after what happened with WinRT and Modern apps. Sure, Windows 10 is shaping up to be the Windows 7 to the ills of Windows 8, but that doesn’t necessarily mean that all the technological innovations that come along with it will be welcomed with open arms. At least now the focus is off building a tablet/mobile like experience and attempting to shoehorn it in everywhere, something which I believe was behind much of the angst with Windows 8. It’ll likely be another year before we know one way or the other and I’m very keen to see how this pans out.

Would You Pay $10,000 to Never Pay an Electricity Bill Again?

Make no mistake; renewables are the future of energy generation. Fossil fuels have helped spur centuries of human innovation that would otherwise have been impossible but they’re a finite resource, one that’s taking an incredible toll on our planet. Connecting renewable sources to the current distribution grid only solves part of the problem as many renewables simply don’t generate power at all times of the day. However thanks to some recent product innovations this problem can be largely alleviated and, most interestingly, at a cost that I’m sure many could stomach if it meant never paying a power bill again.

Thanks to the various solar incentive schemes that have run both here in Australia and in other countries around the world the cost of solar photovoltaic panels has dropped considerably over the past decade. Where you used to pay on the order of tens of dollars per watt, today you can easily source panels for under $1 per watt, with installation not costing much more than that. Thus what used to cost tens of thousands of dollars can now be had for a far more reasonable sum, something I’m sure many would include in a new build without breaking a sweat.

The secret sauce to this however comes to us via Tesla.

Back in the early days of many renewable energy incentive programs (and still today in some lucky countries) the feed-in tariffs were extremely generous, often multiple times the price of a kilowatt hour consumed from the grid. This meant that most arrays would completely negate the energy bill of a house, even if they only generated for part of the day. Most of these programs have since been phased out or reduced significantly and, for Australia at least, it is now preferable to use the energy you generate rather than feed it in to offset your grid consumption. The problem is that the majority of people with solar arrays aren’t home using energy during peak generation times, significantly reducing their return on investment. The Tesla Powerwall shifts that dynamic drastically, allowing them to store their generated power and use it when they most need it.

Your average Australian household uses around 16kWh of electricity every day, something which a 4kW photovoltaic system would be able to cover. To ensure you had that energy on tap at any given moment you’d probably want to invest in both a 10kWh and a 7kWh Powerwall, both of which could be fully charged during an average day. The cost of such a system, after government rebates, would likely end up in the $10,000 region. Whilst it would likely still require a grid connection to smooth out the power requirements a little (and to sell off any excess generated on good days) the monthly power bill would all but disappear. Just going off my current usage the payback time for such a system is just on 6 years, much shorter than the expected lives of both the panels and the accompanying batteries.
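
That payback figure falls out of some very simple arithmetic. The sketch below reproduces it; the grid tariff and effective sun-hours are my own assumptions, with everything else taken from the figures above.

```python
# Back-of-the-envelope payback calculation for the solar + Powerwall setup above.
# Tariff and sun-hours are my own assumptions, not quoted figures.
daily_usage_kwh = 16          # average Australian household usage (from above)
array_size_kw = 4             # proposed photovoltaic array (from above)
sun_hours_per_day = 4.2       # assumed average effective sun hours
tariff_per_kwh = 0.28         # assumed grid price, dollars per kWh
system_cost = 10_000          # rough post-rebate cost (from above)

daily_generation = array_size_kw * sun_hours_per_day   # ~16.8 kWh/day
offset_kwh = min(daily_usage_kwh, daily_generation)    # usage the system covers
annual_saving = offset_kwh * tariff_per_kwh * 365      # roughly $1,635 a year
payback_years = system_cost / annual_saving            # comes out near 6 years

print(f"Generation: {daily_generation:.1f} kWh/day")
print(f"Annual saving: ${annual_saving:,.0f}")
print(f"Payback: {payback_years:.1f} years")
```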

I don’t know about you but that outlay seems like a no-brainer, especially for any newly built house. The cost of such a system is only going to go down over time as more consumers and companies increase demand for panels and, hopefully, products like the Tesla Powerwall. Going off-grid like this used to be the realm of fantasy and conspiracy theorists but the technology has now been consumerised to the point where it will soon be available to anyone who wants it. If I were running a power company I’d be extremely worried, as that industry is about to be heavily disrupted.

HP’s “The Machine” Killed, Surprising No One.

Back in the day it didn’t take much for me to get excited about a new technology. The rapid progression we saw from the late 90s through to the early 2010s had us all fervently awaiting the next big thing, as it seemed nearly anything was within our grasp. The combination of getting older and being disappointed one too many times hardened me against that optimism and I now routinely try to avoid the hype around anything I don’t feel is a sure bet. Indeed I said much the same about HP’s The Machine last year and it seems my skepticism has paid dividends, although I can’t say I feel great about it.

For the uninitiated, HP’s The Machine was going to be the next revolutionary step in computing. Whilst the mockups would be familiar to anyone who’s seen the inside of a standard server, the components were going to be anything but, incorporating such wild technologies as memristors and optical interconnects. What put it above many other pie in the sky concepts (among which I include D-Wave’s quantum computers, as the jury is still out on whether they provide a quantum speedup) is that it was based on real progress HP had made in many of those spaces in recent years. Even that wasn’t enough to break through my cynicism however.

And today I found out I was right, god damnit.

The reasons cited were the ones I was pretty sure would eventuate, namely that no one has been able to commercialize memristors at scale in any meaningful way. Since The Machine was supposed to be built almost entirely on that technology it should be no surprise that it’s been canned on the back of it. Instead of being the moonshot project HP announced last year it’s now going to be some form of technology demonstrator platform, ostensibly to draw software developers across to the new architecture and get them building on it.

Unfortunately this will likely end up being little more than a giant server with a silly amount of RAM stuffed into it, 320TB to be precise. Whilst that may attract some people to the platform out of curiosity I can’t imagine anyone shelling out the requisite cash on the hope that they’d be able to use a production version of The Machine somewhere down the line. It would be like the Sony Cell processor all over again, except instead of costing you maybe a couple of thousand dollars to experiment with it you’d be in for tens of thousands, maybe hundreds of thousands, just to get your hands on some experimental architecture. HP might attempt to subsidise that but considering the already downgraded vision I can’t fathom them throwing even more money at it.

HP could very well turn around in 5 or 10 years with a working prototype and make me look stupid and, honestly, if they did I would very much welcome it. Whilst predictions of Moore’s Law ending come true at a rate inverse to how often they’re made (read: not at all), that doesn’t mean there aren’t a few ceilings on the horizon that will need to be addressed if we want to continue this rapid pace of innovation. HP’s The Machine was one of the few ideas that could’ve pushed us significantly ahead of the curve and its demise is, whilst completely expected, still a heart wrenching outcome.

Curved Screens Are a Waste of Money.

Consumer electronics vendors are always looking for the next thing that will convince us to upgrade to the latest and greatest. For screens and TVs this used to be a race of resolution and frame rate, however things began to stall once 1080p became ubiquitous. 3D and 4K were the last two features screen manufacturers used to tempt us, although neither proved a compelling reason for many to upgrade. Faced with flagging sales the race was on to find another must-have feature and the result is the bevy of curved screens now flooding the market. Like their predecessors though, curved screens don’t provide anything worth having and, all things considered, might even be a detrimental attribute.

You’d be forgiven for thinking that a curved screen is a premium product as they’re most certainly priced that way. Most curved screens tack an extra thousand or two onto the price of an equivalent flat panel, and should you want any other premium feature (like it being thin) then you’re going to be paying some serious coin. The benefit of a curved screen, according to the manufacturers, is that it provides a more theatrical experience, making the screen appear bigger as more of it fills your field of view. Others will say it reduces picture distortion, as objects in the middle of a flat screen appear larger than those at the edge. The hard fact of the matter is that, for almost all use cases, neither of these claims holds true.

As Ars Technica demonstrated last year, the idea that a curved screen has a larger apparent size than its flat counterpart only holds in scenarios that aren’t likely to occur with regular viewing. Should you find yourself 3 feet away from your 55″ screen (an absolutely ludicrous prospect for any living room) then yes, the curve may make the screen appear slightly larger than it actually is. In a much more typical setting, i.e. not directly in front of it and at a more reasonable distance, the effect vanishes. Suffice to say you’re much better off buying a bigger set than investing in a curved one to try and get the same effect.
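
The apparent-size claim is really just geometry. The quick sketch below computes the horizontal viewing angle of a flat 55″ 16:9 panel at 3 feet and at a more realistic 8 feet, which shows why the curve has so little to work with at normal viewing distances (the flat-screen formula is exact; treating the curve as a small perturbation of it is my own simplification).

```python
import math

def horizontal_fov(diagonal_in, distance_in, aspect=(16, 9)):
    """Horizontal viewing angle (degrees) of a flat screen viewed head-on."""
    w_ratio, h_ratio = aspect
    width = diagonal_in * w_ratio / math.hypot(w_ratio, h_ratio)  # panel width in inches
    return math.degrees(2 * math.atan((width / 2) / distance_in))

for feet in (3, 8):
    fov = horizontal_fov(55, feet * 12)
    print(f'55" flat screen at {feet} ft: ~{fov:.0f} degrees of your view')

# Roughly 67 degrees at 3 ft but only about 28 degrees at 8 ft: at
# living-room distances the panel fills so little of your view that a
# gentle curve cannot meaningfully change its apparent size.
```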

The picture distortion argument is similarly flawed, as most reviewers report seeing increased geometric distortion when viewing content on a curved screen. The fundamental problem is that the content wasn’t created with a curved screen in mind: cameras use rectilinear lenses to capture images onto a flat sensor plane, something which isn’t accounted for when the resulting image is displayed on a curved surface. Thus the image is by definition distorted, and since none of the manufacturers I’ve seen talk about any image correction technology for their curved screens it’s safe to assume they’re doing nothing to correct it.

So if you’ve been eyeing off a new TV (like I recently have) and are thinking about going curved, the simple answer is: don’t. The premium charged for the feature nets no benefit in typical usage scenarios and is far more likely to create problems than solve them. Thankfully there are still many great flat screens available, typically with all the same features as their curved brethren for a much lower price. Hopefully we don’t have to wait too long for this fad to pass as it’s honestly worse than 3D and 4K, which at least had some partial benefits in certain situations.