Posts Tagged ‘google’

Google Provides Insight Into SSD Reliability.

SSDs may have been around for some time now but they’re still something of an unknown quantity. Their performance benefits are undeniable and their cost per gigabyte has plummeted year after year. However, for the enterprise space, that unknown status has led to a lot of hedged bets when it comes to their use. Most SSDs have a large portion of over-provisioned space to accommodate failed cells and wear levelling. A lot of SSDs are sold as “accelerators”, meant to speed up operations but not to hold critical data for any length of time. This all comes from a lack of good data on their reliability and failure rates, something which can only come with time and use. Thankfully Google has been gathering exactly that data and, at a recent conference, released a paper about their findings.

SSDs

 

The paper focused on three different types of flash media: the consumer-level MLC, the more enterprise-focused SLC and the somewhere-in-the-middle eMLC. These were all custom devices, sporting Google’s own PCIe interface and drivers, however the chips they used were run-of-the-mill flash. The drives were divided into 10 categories: 4 MLC, 4 SLC and 2 eMLC. For each of these drive types several different metrics were collected over their 6-year lifetime: raw bit error rate (RBER), uncorrectable bit error rate (UBER), program/erase cycles and various failure rates (bad blocks, bad cells, etc.). All of these were then collated to provide insights into the reliability of SSDs, how they compare to each other and how they stack up against old-fashioned spinning rust drives.
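To make those two error-rate metrics a little more concrete, here’s a minimal sketch of how they’re typically defined (errors divided by total bits read). The formulas are the standard textbook ones rather than a reproduction of Google’s collection pipeline, and the example numbers are invented.

```python
def raw_bit_error_rate(raw_bit_errors: int, bits_read: int) -> float:
    """RBER: bit errors seen (and usually corrected by ECC) per bit read."""
    return raw_bit_errors / bits_read

def uncorrectable_bit_error_rate(uncorrectable_errors: int, bits_read: int) -> float:
    """UBER: errors the ECC could not fix, per bit read."""
    return uncorrectable_errors / bits_read

# Invented example: a drive that has read 10 TB over its life.
bits_read = 10 * 10**12 * 8                        # 10 TB expressed in bits
print(raw_bit_error_rate(2_500_000, bits_read))    # ~3.1e-08
print(uncorrectable_bit_error_rate(3, bits_read))  # ~3.8e-14
```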

Probably the most stunning finding in the report is that, in general, SLC drives are no more reliable than their MLC brethren. For both enterprises and consumers this is a big deal as SLC-based drives are often several times the price of their MLC equivalents. This should allay any fears that enterprises had about using MLC-based products as they will likely be just as reliable and far cheaper. Indeed products like the Intel 750 series (one of which I’m using for big data analysis at home) provide the same capabilities as products that cost ten times as much and, based on Google’s research, will last just as long.

Interestingly the biggest predictive indicator for drive reliability wasn’t the RBER, UBER or even the number of P/E cycles. In fact the most predictive factor of drive failure was the physical age of the drive itself. What this means is that, for SSDs, there must be other factors at play which affect drive reliability. The paper hypothesizes that this might be due to silicon aging but it doesn’t appear that they had enough data to investigate that further. I’m very much interested in how this plays out as it will likely come down to the way the chips are fabricated (i.e. different types of lithography, doping, etc.), something which does vary significantly between manufacturers.

It’s not all good news for SSDs however, as the research showed that whilst SSDs have an overall failure rate below that of spinning rust they do exhibit a higher UBER. What this means is that SSDs will have a higher rate of unrecoverable errors, which can lead to data corruption. Many modern operating systems, applications and storage controllers are aware of this and can accommodate it, but it’s still an issue for systems holding mission- or business-critical data.

This kind of insight into the reliability of SSDs is great and just goes to show that even nascent technology can be quite reliable. The MLC vs SLC comparison is telling, showing that whilst a certain technology may exhibit one better characteristic (in this case P/E cycle count) that might not be the true indicator of reliability. Indeed Google’s research shows that the factors we have been watching so closely might not be the ones we need to look at. Thus we need to develop new ideas in order to better assess the reliability of SSDs so that we can better predict their failures. Then, once we have that, we can work towards eliminating those failures, making SSDs more reliable still.

First They Came for Chess, Now They Come for Go.

Computers are better than humans at a lot of things but there are numerous problem spaces where they struggle. Anything with complex branching or large numbers of possibilities forces them into costly jumps, negating the benefits of their ability to think in microsecond increments. This is why it took computers so long to go from beating humans at something like tic-tac-toe, a computationally simple game, to beating humans at chess. However one game has proven elusive to even the most cutting-edge AI developers: the seemingly simple game of Go. This is because, unlike chess or other games which often rely on brute-forcing many possible moves and calculating the best one, Go has an incomprehensibly large number of possible moves, making such an approach near impossible. However Google DeepMind’s AlphaGo has successfully defeated the top European player and will soon face its toughest challenge yet.

Unlike previous game-playing AIs, which often relied on calculating board scores for potential moves, AlphaGo is a neural network that’s undergone what’s called supervised learning. Essentially they’ve taken professional-level Go games and fed their moves into a neural network. Then it’s told which outcomes lead to success and which ones don’t, allowing the neural network to develop its own pattern recognition for winning moves. This isn’t what let them beat a top Go player however, as supervised learning is a well-established principle in the development of neural networks. Their secret sauce appears to be a combination of an algorithm called Monte Carlo Tree Search (MCTS) and the fact that they pitted the AI against itself in order for it to get better.

MCTS is a very interesting idea, one that’s broadly applicable to games with a finite set of moves or those with set limits on play. Essentially what MCTS does is select moves at random and play them out until the game is finished. Then, when the outcome of that playout is determined, the moves made are used to adjust the weightings of how successful those potential moves were. This, in essence, allows you to determine which set of moves is most promising by narrowing down the problem space, as in the sketch below. Of course the tradeoff here is between how long and deep you want the search to run and how long you have to decide on a move.
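As a rough illustration of that playout idea, here’s a minimal “flat” Monte Carlo move picker in Python: it scores each legal move by random playouts and keeps the one with the best win rate. A full MCTS (and AlphaGo’s version in particular) also builds a search tree with a selection policy such as UCT and is guided by the neural networks; the `Game` interface here (`legal_moves`, `play`, `is_over`, `winner`) is a hypothetical stand-in, not any real library.

```python
import random

def monte_carlo_move(game, player, playouts_per_move=200):
    """Pick a move by estimating each candidate's win rate via random playouts."""
    scores = {}
    for move in game.legal_moves():
        wins = 0
        for _ in range(playouts_per_move):
            sim = game.play(move)            # hypothetical: returns a copy with `move` applied
            while not sim.is_over():         # play the rest of the game out at random
                sim = sim.play(random.choice(sim.legal_moves()))
            if sim.winner() == player:
                wins += 1
        scores[move] = wins / playouts_per_move   # weighting = empirical win rate
    return max(scores, key=scores.get)
```

The number of playouts per move is exactly the time/quality tradeoff mentioned above: more playouts give better estimates but take longer to decide.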

This is where the millions of games that AlphaGo played against itself come into play, as they allowed both the neural networks and the MCTS algorithm to be greatly refined. In their single-machine tests it lost to other Go programs only once out of almost 500 games. In the match against Fan Hui, however, he faced a veritable army of hardware: some 170 GPUs and 1,200 CPUs. That should give you some indication of just how complex Go is and what it’s taken to get to this point.

AlphaGo’s biggest challenge is still ahead of it though as it prepares to face down the top Go player of the past decade, Lee Sedol. As an opponent Lee is in another league entirely, being a 9th dan to Fan’s 2nd dan. How they structure the matches and the infrastructure to support AlphaGo will be incredibly interesting, but whether or not it will come out victorious is anyone’s guess.

Audi Backed Part Time Scientists Take Aim at Google’s Lunar X-Prize.

Announced back in 2007, Google’s Lunar X-Prize was an incredibly ambitious idea. Originally the aim was to spur the then-nascent private space industry to look beyond low Earth orbit, hoping to see a new lunar rover land on the moon by 2012. As with all things space though, these things take time and, as the deadline approached, not one of the registered teams had made meaningful progress towards even launching a craft. That deadline now extends to the end of this year and many of the teams are much closer to actually launching something. One of them has been backed by Audi and has its sights set on more than just the basic requirements.


The team, called Part Time Scientists (PTS), has designed a rover that’s being called the Audi Lunar Quattro. Whilst details are scant as to the specifications, the rover recently made its debut at the Detroit Auto Show where a working prototype was showcased. In terms of capabilities it looks to be focused primarily on the X-Prize objectives, sporting just a single instrument pod which contains the requisite cameras. One notable feature is the ability to tilt its solar panels in either direction, allowing it to charge more efficiently during the lunar day. As to what else is under the hood we don’t yet know, but there are a few things we can infer from their goals for the Audi Lunar Quattro’s mission.

The Google Lunar X-Prize’s main objective is for a private company (with no more than 10% government funding) to land a rover on the moon, drive it 500 m and stream the whole thing in real time back to Earth in high definition. It’s likely that the large camera on the front is used for the video stream whilst the two smaller ones on either side are stereoscopic imagers to help with driving it on the lunar surface. PTS have also stated that they want to travel to the resting site of the Lunar Roving Vehicle left behind by Apollo 17. This likely means that much of the main body of the rover is dedicated to batteries, as they’ll need to move some 2.3 km in order to cover off that objective.

There are a couple of other objectives they could potentially be shooting for, although the relative simplicity of the rover rules out a few of them. PTS have already said they want to go for the Apollo Heritage Prize so it wouldn’t be a surprise if they went for the Heritage Prize as well. There’s the possibility they could be going for the Range Prize since, if their rover is capable of covering half the distance, I don’t see any reason why it couldn’t do it again. The rover likely can’t get the Survival Prize as surviving a lunar night is a tough challenge for a solar-powered craft. I’d also doubt its ability to detect water as that single instrument stalk doesn’t look like it could house the appropriate instrumentation to accomplish that.

One thing that PTS haven’t yet completed though, and this will be crucial to their success, is locking in a launch contract. They’ve stated that they want to launch a pair of rovers in the 3rd quarter of 2017, however without a launch deal signed now I’m skeptical about whether this can take place. Only two teams competing for the Lunar X-Prize have locked in launch contracts to date and, with the deadline fast approaching, it’s going to get harder to find a rocket that has the required capabilities.

Still it’s exciting to see the Lunar X-Prize begin to bear fruit. The initial 5-year timeline was certainly aggressive but it appears to have helped spur numerous companies on towards achieving the lofty goal. Whilst it might take another 5 years past that original deadline to fulfill it, the lessons learned and technology developed along the way will prove invaluable both on the moon and back here on Earth. Whilst we’re not likely to see a landing inside of this year I’m sure we’ll see something the year afterwards. That’s practically tomorrow, when you’re talking in space time.

D-Wave 2X Finally Demonstrates Quantum Speedup.

The possibilities that emerge from a true quantum computer are to computing what fusion is to energy generation. It’s a field of active research, one in which many scientists have spent their lives, yet the promised land still seems to elude us. Just like fusion though, quantum computing has seen several advancements in recent years, enough to show that it is achievable without giving us a concrete idea of when it will become commonplace. The current darling of the quantum computing world is D-Wave, the company that announced they had created functioning qubits many years ago and set about commercializing them. However they were unable to show substantial gains over simulations on classical computers for numerous problems, calling into question whether or not they’d actually created what they claimed to. Today, however, brings us results that demonstrate quantum speedup, on the order of 10⁸ times faster than regular computers.

D-Wave 2X

For a bit of background, the D-Wave 2X (the device pictured above and the one which showed quantum speedup) can’t really be called a quantum computer, even though D-Wave calls it that. Instead it’s what you’d call a quantum annealer, a specific kind of computing device that’s designed to solve very specific kinds of problems. This means that it’s not a Turing-complete device, unable to tackle the wide range of computing tasks which we’d typically expect a computer to be capable of. The kinds of problems it can solve, however, are optimizations, like finding local maxima or minima for a given equation with lots of variables. This is still quite useful, which is why many large companies, including Google, have purchased one of these devices.

In order to judge whether or not the D-Wave 2X was actually doing computations using qubits (and not just some fancy tricks with regular processors) it was pitted against a classical computer performing the same task using a technique called simulated annealing (a rough sketch of which is below). Essentially this means that the D-Wave was running against a simulated version of itself, a relatively easy challenge for a quantum annealer to beat. However identifying the problem space in which the D-Wave 2X showed quantum speedup proved tricky, with it sometimes running at about the same speed or showing only a mild (compared to expectations) speedup. This brought into question whether or not the qubits that D-Wave had created were actually functioning like they said they were. The research continued however and has just recently borne fruit.
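For reference, here’s what simulated annealing looks like in its simplest form, minimising a toy one-dimensional function. The real benchmarks used much larger Ising-style optimization problems rather than anything this small, so treat this purely as a sketch of the classical technique, with made-up parameters.

```python
import math
import random

def simulated_annealing(cost, x0, steps=10_000, t_start=5.0, t_end=0.01):
    """Minimise `cost` by random moves, accepting some uphill steps early on."""
    x, best = x0, x0
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # exponential cooling schedule
        candidate = x + random.gauss(0, 0.5)             # random neighbouring state
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with probability e^(-delta/t),
        # which shrinks as the "temperature" t falls.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best

# Example: a bumpy function with many local minima.
print(simulated_annealing(lambda x: x * x + 3 * math.sin(5 * x), x0=4.0))
```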

The research, published on arXiv (not yet peer-reviewed), shows that the D-Wave 2X is about 100 million times faster than its simulated counterpart. Additionally, a similar speedup was observed against another classical algorithm, quantum Monte Carlo. This is the kind of speedup that the researchers have been looking for and it demonstrates that the D-Wave is indeed a quantum device. The research also points towards simulated annealing being the best baseline against which to judge quantum systems like the D-Wave 2X, something which will help immensely with future research.

There’s still a long way to go until we have a general-purpose quantum computer, however research like this is incredibly promising. The team at Google which has been testing this device has come up with numerous improvements they want to make to it and developed systems to make it easier for others to exploit such quantum systems. It’s this kind of fundamental research which will be key to the generalization of this technology and, hopefully, its eventual commercialization. I’m very much looking forward to seeing what the next generation of these systems brings and hope their results are just as encouraging.

Bringing the Kappa to YouTube.

If you’re looking to watch people play games live there’s really only one place to look: Twitch. It started out its life as the bastard stepchild of Justin.tv, a streaming platform for all things, however it quickly outgrew its parent and at the start of last year the company dumped the original product and dedicated itself wholly to Twitch. Various other streaming services have popped up since then but none have been able to hold a candle to Twitch’s dominant position in the game streaming market. The one platform that could, however, has just announced YouTube Gaming, which has the potential to be the first real competitor Twitch has faced in a very long time.

YouTube Gaming

Whilst the product isn’t generally available yet, slated to come out sometime soon, it has already made its way into the hands of many journalists who’ve taken it for a spin. The general sentiment seems to be that YouTube has essentially copied the fundamental aspects of Twitch’s streaming service, mostly in regard to the layout and features, whilst adding a couple of extra things which serve as bait to attract both streamers and viewers to the platform. Probably the most interesting aspects of YouTube’s offering are the things that set it apart: there’s no equivalent of Twitch’s paid subscription system, and the dreaded ContentID system will be in full force on all streams.

The main thing that will draw people to YouTube’s streaming service, however, is most likely the huge infrastructure that YouTube is able to draw on. YouTube has already demonstrated that it can handle the enormous amounts of traffic that live streaming can generate, holding the record for concurrent live stream viewers at some 8 million for the Felix Baumgartner jump back in 2012. Twitch, despite its popularity, has experienced numerous growing pains when attempting to scale up its infrastructure outside of the US and many have pined for a much better service. YouTube, with the Google backbone at its disposal, has the potential to deliver that, however I’m not sure if that will be enough to grab a significant share of this market.

Twitch has, for better or for worse, developed a kind of culture around streaming games and has thus set a lot of expectations for what people would want in a competing streaming product. YouTube Gaming gets most of the way there with the current incarnation of the product, however the absence of a few things, like an IRC backend for chat and paid subscriptions, could end up keeping people away from the platform. The former is easy enough to fix, either by adopting IRC directly or simply providing better tools for managing the chat stream, however the latter isn’t likely to change anytime soon. Sure, YouTube has its one-off payment system but that runs against current community norms and thus will likely not see as much use. That then feeds into a monetization problem for streamers which is likely to deter many from adopting the platform.

All that being said, it’s good to see some competition coming to this space as it should hopefully mean fiercer innovation from both parties as they vie for market share. YouTube Gaming has a massive uphill battle ahead of it, however if anyone has the capability to fight Twitch on its own ground it’s YouTube. The next 6 months will be telling as they will show just how many people are willing to convert away from the Twitch platform and whether or not it will become a sustainable product for YouTube long term.

Nexus 6: Stock Android is the Only Way to Fly.

My Xperia Z managed to last almost 2 years before things started to go awry. Sure it wasn’t exactly a smooth road for the entire time I had the phone, what with the NFC update refusing to apply every time I rebooted my phone or the myriad issues that plagued its Android 4.4 release, but it worked well enough that I was willing to let most of those problems slide. However the last month of its life saw its performance take a massive dive and, no matter what I did to cajole it back to life, it continued to sputter and stutter, making for a rather frustrating experience. I had told myself that my next phone would run stock Android so I could avoid any potential carrier or manufacturer issues and that left me with one option: the Nexus 6. I’ve had this phone for just over a month now and I have to say that I can’t see myself going back to a non-stock experience.

Nexus 6 Box

First things first: the size. When I moved to the Xperia Z I was blown away by how big it was and figured that anything bigger would just become unwieldy. Indeed when I pulled the Nexus 6 out of the box it certainly felt like a behemoth beside my current 5″ device, however it didn’t take me long to grow accustomed to the size. I attribute this mostly to the subtle design features, like the tapered edges and the small dimple on the back where the Motorola logo is, which make the phone feel both thinner and more secure in the hand than its heft would suggest. I definitely appreciate the additional real estate (and the screen is simply gorgeous) although had the phone come in a 5″ variant I don’t think I’d be missing out on much. Still, if the size is the only thing holding you back from buying this handset I’d err on the side of taking the plunge as it quickly becomes a non-issue.

The 2 years since my last upgrade have seen a significant step up in the power that mobile devices are capable of delivering and the Nexus 6 is no exception in this regard. Under the hood it’s sporting a quad-core 2.7GHz Qualcomm chip coupled with 3GB RAM and the latest Adreno GPU, the 420. Most of this power is required to drive the absolutely bonkers resolution of 2560 x 1440, which it does admirably for pretty much everything, even being able to play the recently ported Hearthstone relatively well. This is all backed by an enormous 3220mAh battery which seems more than capable of keeping this thing running all day, even when I forget that I’ve left tethering enabled (it usually has about 20% left the morning after I’ve done that). The recent updates seem to have made some slight improvements to this but I didn’t have enough time before the updates came down to make a solid comparison.

Nexus 6

Layered on top of this top-end piece of silicon is the wonderful Android 5.1 (codename Lollipop) which, I’m glad to say, lives up to much of the hype that I had read about it before laying down the cash for the Nexus 6. The Material Design philosophy that Google has adopted for its flagship mobile operating system is just beautiful and, with most of the big name applications adhering to it, you get an experience that’s consistent throughout the Android ecosystem. Of course applications that haven’t yet updated their design stick out like a sore thumb, something which I can only hope will be a non-issue within a year or so. The lack of additional crapware also means that the experience across different system components doesn’t vary wildly, something which was definitely noticeable on the Xperia Z and my previous Android devices.

Indeed this is the first Android device that I’ve owned that just works, as opposed to my previous ones which always required a little bit of tinkering here or there to sand off the rough edges of either the vendor’s integration bits or the oddities of the Android release of the time. The Nexus 6 with its stock 5.1 experience has required no such tweaking, with my only qualm being that newly installed widgets weren’t available for use until I rebooted the phone. Apart from that the experience has been seamless, from the initial set up (which, with NFC, was awesomely simple) all the way through my daily use over the last month.


The Nexus line of handsets always got a bad rap for the quality of the camera but, in all honesty, it seems about on par with my Xperia Z. This shouldn’t be surprising since they both came with one of Sony’s venerable Exmor sensors, which have a track record of producing high quality phone cameras. The Google Camera software layered on top of it, though, is streets ahead of what Sony had provided, both in terms of functionality and performance. The HDR mode seems to actually work as advertised, being able to extract a lot more detail from a scene than I would’ve expected from a phone camera. Of course the tiny sensor size still means that low-light performance isn’t its strong suit, but I’ve long since moved past the point in my life where blurry pictures in a club were things I looked back on fondly.

Overall I’m very impressed with the Google Nexus 6, as my initial apprehension had me worried that I’d end up regretting my purchase. I’m glad to say that’s not the case at all as my experience has been nothing short of stellar and has confirmed my suspicion that the only Android experience anyone should have is the stock one. Unfortunately that does limit your range of handsets severely, but it does seem that more manufacturers are coming around to the idea of providing a stock Android experience, opening up the possibility of more handsets with the ideal software powering them. Whilst it might not be as cheap as the Nexus phones before it, the Nexus 6 is most certainly worth the price of admission and I’d have no qualms about recommending it to other Android fans.

Google’s Project Fi: Breaking Down Communication Barriers.

I remember when I travelled to the USA back in 2010 I figured that wifi was ubiquitous enough that I probably wouldn’t have to worry about getting a data plan. Back then that was partly true; indeed I was able to do pretty much everything I needed to for the first two weeks before Internet on the go became something of a necessity. Thankfully that was easily fixed by getting a $70 prepaid plan from T-Mobile which had unlimited everything, more than enough to cover the gap. Still, it took a good few hours out of my day to get that sorted and since then I’ve always wanted a universal mobile plan that didn’t cost me the Earth.

Today Google has announced just that.

Google Project Fi

Not to be confused with Google’s other, similar-sounding endeavour, Project Fi is a collaboration between Google and numerous cellular providers to give end users a single plan that will work for them across 120 countries. Fi-enabled handsets, of which there is currently only one (the Nexus 6), are able to switch between wifi and a multitude of local cellular providers for calls, texts and, most important of all, data. This comes hand in hand with a bunch of other features, like being able to check your voicemails through Google Hangouts, as well as other nifty things like Google Voice. Suffice to say it sounds like a pretty terrific deal and, thankfully, remains so even when you include the pricing.

The base plan will set you back $20, which includes unlimited domestic calls (I’m assuming that means national), unlimited texts to anywhere and access to the wifi and cellular networks that are part of the service. From there you can add data to your plan at a rate of $10 per GB which, whilst not exactly the cheapest around (what I currently get on Telstra for $95 would cost me $120 on Fi), does come with the added benefit of being charged in 100MB increments, so if you don’t use all of your data allowance by the end of the month you don’t get charged for it. The real benefit is, of course, that the data works across 120 countries rather than my current one, something I would’ve made good use of back when I was travelling a lot for work.
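As a back-of-the-envelope check on those numbers, here’s the pricing described above expressed as a tiny script: $20 base for calls and texts plus $10 per GB of data, billed in 100MB (0.1 GB) steps. Rounding up to the next increment is an assumption on my part, and the figures are the 2015 launch prices quoted here, not anything authoritative.

```python
import math

def fi_monthly_bill(data_used_gb: float, base: float = 20.0, per_gb: float = 10.0) -> float:
    """Estimate a Project Fi bill under the pricing described in the post."""
    billable_gb = math.ceil(data_used_gb * 10) / 10   # bill data in 0.1 GB increments
    return base + per_gb * billable_gb

print(fi_monthly_bill(3.0))    # 50.0  -> $50 for 3 GB of data
print(fi_monthly_bill(10.0))   # 120.0 -> the Telstra comparison above
```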

Like many cool services however, Fi will only be available to US residents to begin with, as the coverage map doesn’t extend far past the American border. This is most likely due to the first two providers they’ve partnered with, Sprint and T-Mobile, not having a presence elsewhere. However it looks pretty likely that Google will want to extend this partnership to carriers in other countries, mostly with the aim of reducing their underlying costs for providing data coverage overseas. The real kicker will be seeing who they partner with in each country as, depending on who they choose, the experience could vary wildly, something I’m sure they’re keen to avoid.

I don’t think I’d make the switch to Google Fi right now even if it were available, at least not until I’d seen a few good reports on how the service compares to the other big providers. To be sure, it’d definitely be something I’d like to have when I’m travelling, especially now considering how much more I can get done on my phone compared to when I last spent a good chunk of time abroad. As my everyday provider though I’m not so sure, as the features they’re currently offering aren’t enough to overcome the almost $30 price differential.

I’m sure that will change with time, however.

YouTube Now HTML5 by Default*.

Flash, after starting out its life as one of a bevy of animation plugins for browsers back in the day, has become synonymous with online video. It’s also got a rather terrible reputation for using an inordinate amount of system resources to accomplish this feat, something which hasn’t gone away even in the latest versions. Indeed even my media PC, which has a graphics card with accelerated video decoding, struggles with Flash, its unoptimized player monopolizing every skerrick of resources for itself. HTML5 sought to solve this problem by making video a part of the base HTML specification which, everyone had hoped, would see an end to proprietary plug-ins and the woes they brought with them. However the road to getting that standard widely adopted hasn’t been an easy one, as YouTube’s 4-year journey to making HTML5 the default shows.

youtube

Google had always been on the “let’s use an open standard” bandwagon when it came to HTML5 video, which was at odds with other members of the HTML5 committee who wanted to use something that, whilst more ubiquitous, was a proprietary codec. This, unfortunately, led to a deadlock, with the members unable to agree on a default standard. Despite what YouTube’s move to HTML5 would indicate, there is still no defined standard for which codec to use for HTML5 video, meaning there’s no way to guarantee that a video you’ve encoded one way will be viewable by all HTML5-compliant browsers. Essentially it looks like a format war is about to begin where the wider world will decide the champion and the HTML5 committee will just have to play catch-up.

YouTube has unsurprisingly decided to go for Google’s VP9 codec for their HTML5 videos, a standard which they fully control. Whilst they’ve had HTML5 video available as an option for some time now, it never enjoyed the widespread support required for them to make it the default. It seems they’ve now got buy-in from most of the major browser vendors to be able to make the switch, so people running Safari 8, IE 11, Chrome and the Firefox beta will be given the Flash-free experience. This has the potential to set up VP9 as the de facto codec for HTML5, although I highly doubt it’ll be officially crowned anytime soon.

Google has also been hard at work ensuring that VP9 enjoys wide support across platforms, as several major chip producers already have System on a Chip (SoC) designs that support the codec. Without that, the mobile experience of VP9-encoded videos would likely be extremely poor, hindering adoption substantially.

Whilst a codec that’s almost entirely under the control of Google might not have been the ideal solution that the open source evangelists were hoping for (although it seems pretty open to me), it’s probably the best solution we were going to get. I haven’t heard of any of the other competing standards, apart from H.264, having the kind of widespread support that Google’s VP9 now does. It’s likely that the next few years will see many people adopting a couple of standards whilst consumers duke it out in the next format war, with the victor not clear until it’s been over for a couple of years. For me though I’m glad it’s happened and hopefully soon we can do away with the system hog that Flash is.

Google’s Solution to AdBlock Plus: Contributor.

I’d like to say that I’ve never run ads on my blog out of a principled stance against them but the reality is I just wouldn’t make enough out of them to justify their existence. Sure this blog does cost me a non-zero sum to maintain but it’s never been much of a burden and I wouldn’t feel right compromising the (now) good look of the website just to make a few bucks on the side. This hasn’t stopped me from wondering how I would go about making my living as a blogger, although unfortunately pretty much every road leads back to advertising. However that model might be set to change with one of Google’s latest products: Contributor.

Google Contributor

The idea behind it is simple: you select a monthly amount you want to contribute to the sites you frequent and, for sites participating in the Contributor program, you’ll see no ads from Google AdSense. It’s a slight tweak on the idea of services like Flattr with a much lower barrier to adoption, since most people already have a Google account and most sites run AdSense in some form. You also don’t have to specify how much goes to each site you visit; Google handles that by counting up your pageviews and dividing your monthly contribution accordingly. In a world where AdBlock Plus has become one of the most-installed browser extensions this could be a way for publishers to claw back a little revenue and, of course, for Google to pad its own bottom line.
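Here’s a sketch of that proportional split: a reader’s monthly contribution divided across participating sites by their share of that reader’s pageviews. The exact accounting (Google’s cut, minimums, rounding) isn’t something covered here, so this is purely illustrative and the site names are made up.

```python
def split_contribution(monthly_amount, pageviews):
    """Divide a monthly contribution across sites in proportion to pageviews."""
    total_views = sum(pageviews.values())
    return {site: monthly_amount * views / total_views
            for site, views in pageviews.items()}

print(split_contribution(3.00, {"site-a.example": 40, "site-b.example": 60}))
# {'site-a.example': 1.2, 'site-b.example': 1.8}
```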

This isn’t Google’s first foray into crowdfunding for publishers, as just a few months ago they released Fan Funding for YouTube channels. That was mostly a reaction to other crowdfunding services like Patreon and Subbable, whereas Contributor feels like a more fully thought-out solution, one that has some real potential to generate revenue for content creators. Hopefully Google will scale the program into a more general solution as time goes on, as I can imagine a simple “pay $3 to disable all AdSense ads” kind of service would see an incredibly large adoption rate.

On the flip side though I’m wondering how many people would convert away from blocking ads completely to using Contributor or a similar service. I know those AdBlock-sensing scripts that put up guilt-trip ads (like DotaCinema’s Don’t Make Sven Cry one) are pretty effective in making me whitelist certain sites, but going the next step to actually paying money is a leap I’m not sure I’d make. I know it’s nothing in the grand scheme of things, $36/year is a pittance for most people browsing the Internet, but it’s still a barrier. That being said, it’s a lower barrier than any of the other options available.

I think Contributor will be a positive thing for both publishers and consumers in the long run; it’ll just depend on how willing people are to fork over a couple of bucks a month and how much of that makes its way back to the sites it supports. You’ll still need a decent sized audience to make a living off it, but at least you’d have another tool at your disposal to have them support what you do. Meanwhile I and all the other aspiring small-time bloggers will continue to fantasize about what it would be like to get paid for what we do, even though we know it’ll never happen.

But it could…couldn’t it? 😉

The Modular Phone Idea is Still Alive in Project Ara.

There are two distinct schools of thought when it comes to the modular smartphone idea. The first is that it’s the way phones were meant to be made, giving users the ability to customize every aspect of their device and reducing e-waste at the same time. The other flips that idea on its head, stating that the idea is infeasible due to the limitations inherent in a modular platform and the reliance on manufacturers to build components specifically for it. Since I tend towards the latter I thought that Project Ara, Google’s (née Motorola’s) attempt at the idea, would likely never see the light of day, but as it turns out the platform is very real and they even have a working prototype.

Project Ara Prototype

The essence of the idea hasn’t changed much since Motorola first talked about it at the end of last year, being a more restrained version of the Phonebloks idea. The layout is the same as the original design prototypes, giving you space on the back of the unit for about 7 modules and space on the front for a large screen and a speaker attachment. However they also showed off a new, slim version which has space for fewer modules but is a much sleeker unit overall. Google also mentioned that they’re working on a phablet design, which was interesting considering that the current prototype unit was looking to be almost phablet-sized. The whole unit, dubbed Spiral 1, was fully functional, including module removal and swapping, so the idea has definitely come a long way since its inception late last year.

There are a few things that stand out about the device in its current form, primarily the way in which some of the blocks don’t conform to the same dimensions as the others. Most notably you can see this with the blood oxygen sensor sticking out of the top, however you’ll also notice that the battery module is about twice the height of anything else. This highlights one of the bigger issues with modular design, as much of the heft in modern phones is due to the increasingly large batteries they carry with them. The limited space of the modular blocks means that the batteries either have significantly reduced capacity or have to be bigger than the other modules, neither of which is a particularly desirable attribute.

In fact the more I think about Project Ara the more I feel it’s oriented more towards those looking to develop hardware for mobile platforms than towards actual phone users. Being able to develop your specific functionality without having to worry about the rest of the platform frees up a significant amount of time which can then be spent on getting said functionality into other phones. In that regard Project Ara is amazing, however that same flexibility is likely what will turn many consumers off such a device. Sure, having a phone tailored to your exact specifications has a certain allure, but I can’t help but feel that that market is vanishingly small.

It will be interesting to see how the Project Ara platform progresses, as they have hinted that there’s a much better prototype floating around (called Spiral 2) which they’re looking to release to hardware developers in the near future. Whilst having a proof of concept is great, there are still a lot of questions around module development, available functionality and, above all, the usability of the system when it’s complete. It’s looking like a full consumer version isn’t due out until late next year or early 2016, so we’re going to have to wait a while to see what the fully fledged modular smartphone will look like.