Windows 8 was supposed to bring with it the platform by which developers could produce applications with consistent experiences across platforms. This came in the form of Metro (now Modern) apps powered by the WinRT framework, something which had all the right technological bells and whistles to make such a thing possible. However with the much maligned desktop experience, much of the criticism for which was focused squarely on the Metro apps, the platform unification dream died a quick death. Microsoft hasn’t left that dream behind though and their latest attempt to revive it comes to us in the form of Universal Applications. This time around however they’re taking a slightly different approach: letting developers build what they want and giving them the option of porting it directly across to the Windows platform.
Under the hood the architecture of Universal Apps is similar to that of their Metro predecessors, providing a common core set of functionality across platforms, however the difference comes in the form of developers being able to create their own platform specific code on top of the core binary. This alleviates the main issue most people had with Metro apps of the past (i.e. they felt out of place pretty much everywhere) and allows developers to create their own UX for each platform they want to target. This, coupled with the new “4 bridges” strategy, which defines a workflow for each major platform to come into the Universal App fold, means that Microsoft has a very compelling case for developers to spend time on bringing their code across.
As I talked about previously the two major smartphone platforms get their own bridge: Project Islandwood (iOS) and Project Astoria (Android). Since the first announcement it doesn’t seem that much has changed with this particular strategy however one key detail I didn’t know at the time was that you’ll be able to directly import your Xcode project into Visual Studio, greatly reducing the effort required to get going. What kind of support they’ll have for Android applications, like whether or not they’ll let you import Eclipse projects, unfortunately still remains to be seen. They’ve also announced the bridge for web applications (Project Westminster) although that’s looking more and more like a modern version of ActiveX rather than something that web developers will actually be interested in pursuing.
The latest bridge to be announced is Project Centennial, a framework that will allow developers to port current Win32 based applications to the Universal platform. Whilst this likely won’t be the cure-all for everyone’s woes with migrating poorly coded applications onto a more modern OS (App-V and other app virtualization technologies are the only real treatments for that) it does provide an avenue for aging code bases to be revamped for a new platform without a herculean amount of effort. Of course this means that you’ll need both the original codebase and a willingness to rework it, both things which seem to be rare for old corporate applications that can’t seem to die gracefully. Still, another option is always welcome, especially if it drives further adoption of the Universal Platform.
Universal apps seem to have all the right makings for a revolutionary platform however I can’t help but take a reserved position after what happened with WinRT and Modern Apps. Sure, Windows 10 is likely shaping up to be the Windows 7 to the ills of Windows 8, but that doesn’t necessarily mean that all the technological innovations that come along with it will be welcomed with open arms. At least now the focus is off building a tablet/mobile like experience and attempting to shoehorn it in everywhere, something which I believe is behind much of the angst with Windows 8. It’ll likely be another year before we’ll know one way or the other and I’m very keen to see how this pans out.
Make no mistake; renewables are the future of energy generation. Fossil fuels have helped spur centuries of human innovation that would have otherwise been impossible but they are a finite resource, one that’s taking an incredible toll on our planet. Connecting renewable sources to the current energy distribution grid only solves part of the problem as many renewables simply don’t generate power at all times of the day. However thanks to some recent product innovations this problem can be wholly alleviated and, most interestingly, at a cost that I’m sure many would be able to stomach should they never have to pay a power bill again.
Thanks to the various solar incentive schemes that have run both here in Australia and in other countries around the world the cost of solar photovoltaic panels has dropped considerably over the past decade. Where you used to be paying on the order of tens of dollars per watt today you can easily source panels for under $1 per watt, with the installation cost not being much more than that. Thus what used to cost tens of thousands of dollars can now be had for a much more reasonable cost, something which I’m sure many would include in a new build without breaking a sweat.
The secret sauce to this however comes to us via Tesla.
Back in the early days of many renewable energy incentive programs (and in some lucky countries where this continues) the feed-in tariffs were extremely generous, usually multiple times the price of a kilowatt hour consumed off the grid. This meant that most arrays would completely negate the energy usage of a house, even if they only generated for a few hours each day. However most of these programs have been phased out or reduced significantly and, for Australia at least, it is now preferable to consume the energy you generate rather than export it to offset your grid consumption. The problem is that the majority of people with solar arrays aren’t home to use energy during peak generation times, significantly reducing their ROI. The Tesla Powerwall however shifts that dynamic drastically, allowing them to use their generated power when they most need it.
Your average Australian household uses around 16kWh worth of electricity every day, something which a 4kW photovoltaic system would be able to cover. To ensure that you had that amount of energy on tap at any given moment you’d probably want to invest in both a 10kWh and a 7kWh Powerwall, which could both be fully charged during an average day. The cost of such a system, after government rebates, would likely end up in the $10,000 region. Whilst such a system would likely still require a grid connection in order to smooth out the power requirements a little (and to sell off any additional energy generated on good days) the monthly power bill would all but disappear. Just going off my current usage the payback time for such a system is just on 6 years, much shorter than the lives of both the panels and the accompanying batteries.
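The payback maths above is easy enough to sketch out yourself. The grid tariff below is my assumption (roughly what Australian households were paying); the usage, system cost and payback figures come from the paragraph above:

```python
# Rough payback estimate for the solar + Powerwall system described above.
# The tariff is an assumed figure; usage and system cost are from the text.
DAILY_USAGE_KWH = 16      # average Australian household consumption
SYSTEM_COST = 10_000      # 4kW array + 10kWh/7kWh Powerwalls, after rebates
TARIFF = 0.25             # assumed grid price in $/kWh

annual_savings = DAILY_USAGE_KWH * TARIFF * 365   # bill avoided per year
payback_years = SYSTEM_COST / annual_savings

print(f"Savings: ${annual_savings:,.0f}/year, payback in {payback_years:.1f} years")
```

At around $1,460 a year in avoided bills that lands just shy of 7 years, in the same ballpark as my 6 year figure; a higher tariff or any feed-in credits would shorten it further.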
I don’t know about you but that outlay seems like a no-brainer, especially for any newly built house. The cost of such a system is only going to go down with time as more consumers and companies increase their demand for panels and, hopefully, products like the Tesla Powerwall. Going off grid like this used to be in the realms of fantasy and conspiracy theorists but now the technology has been consumerised to the point where it will soon be available to anyone who wants it. If I were running a power company I’d be extremely worried as the industry is about to be heavily disrupted.
Back in the day it didn’t take much for me to get excited about a new technology. The rapid progressions we saw from the late 90s through to the early 2010s had us all fervently awaiting the next big thing as it seemed nearly anything was within our grasp. The combination of getting older and being disappointed a certain number of times hardened me against this optimism and now I routinely attempt to avoid the hype for anything I don’t feel is a sure bet. Indeed I said much the same about HP’s The Machine last year and it seems my skepticism has paid dividends although I can’t say I feel that great about it.
For the uninitiated HP’s The Machine was going to be the next revolutionary step in computing. Whilst the mockups would be familiar to anyone who’s seen the inside of a standard server those components were going to be anything but, incorporating such wild technologies as memristors and optical interconnects. What put this above many other pie in the sky concepts (of which I include things like D-Wave’s quantum computers as the jury is still out on whether or not they’re providing a quantum speedup) is that it was based on real progress that HP had made in many of those spaces in recent years. Even that wasn’t enough to break through my cynicism however.
And today I found out I was right, god damnit.
The reasons cited were ones I was pretty sure would come to fruition, namely the fact that no one has been able to commercialize memristors at scale in any meaningful way. Since The Machine was supposed to be based almost solely on that technology it should be no surprise that it’s been canned on the back of that. Now instead of being the moonshot style project that HP announced last year it’s instead going to be some form of technology demonstrator platform, ostensibly to draw software developers across to this new architecture in order to get them to build on it.
Unfortunately this will likely end up being not much more than a giant server with a silly amount of RAM stuffed into it, 320TB to be precise. Whilst this may attract some people to the platform out of curiosity I can’t imagine that anyone would be willing to shell out the requisite cash in the hope that they’d be able to use a production version of The Machine sometime down the line. It would be like the Sony Cell processor all over again, except instead of costing you maybe a couple thousand dollars to experiment with it you’d be in for tens of thousands, maybe hundreds of thousands, just to get your hands on some experimental architecture. HP might attempt to subsidise that but considering the already downgraded vision I can’t fathom them throwing even more money at it.
HP could very well turn around in 5 or 10 years with a working prototype to make me look stupid and, honestly, if they did I would very much welcome it. Whilst predictions about Moore’s Law ending happen at an inverse rate to them coming true (read: not at all) that doesn’t mean there aren’t a few ceilings on the horizon that will need to be addressed if we want to continue this rapid pace of innovation. HP’s The Machine was one of the few ideas that could’ve pushed us ahead of the curve significantly and its demise is, whilst completely expected, still a heart wrenching outcome.
Consumer electronics vendors are always looking for the next thing that will convince us to upgrade to the latest and greatest. For screens and TVs this used to be a race of resolution and frame rate however things began to stall once 1080p became ubiquitous. 3D and 4K were the last two features which screen manufacturers used to tempt us although neither of them really proved a compelling reason for many to upgrade. Faced with flagging sales the race was on to find another must-have feature and the result is the bevy of curved screens now flooding the market. Like their predecessors though curved screens don’t provide anything that’s worth having and, all things considered, might be a detrimental attribute.
You’d be forgiven for thinking that a curved screen is a premium product as they’re most certainly priced that way. Most curved screens tack on an extra thousand or two over an equivalent flat model and should you want any other premium feature (like, say, it being thin) then you’re going to be paying some serious coin. The benefits of a curved screen, according to the manufacturers, are that it provides a more theatrical experience, making the screen appear bigger as more of it is in your field of view. Others will say that it reduces picture distortion as objects in the middle of a flat screen will appear larger than those at the edge. The hard fact of the matter is that, for almost all use cases, none of these claims holds true.
As Ars Technica demonstrated last year the idea that a curved screen can have a larger apparent size than its flat counterpart only works in scenarios that aren’t likely to occur with regular viewing. Should you find yourself 3 feet away from your 55″ screen (an absolutely ludicrous prospect for any living room) then yes, the curve may make the screen appear slightly larger than it actually is. If you’re in a much more typical setting, i.e. not directly in front of it and at a more reasonable distance, then the effect vanishes. Suffice it to say you’re much better off buying a bigger set than investing in a curved one to try and get the same effect.
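A little trigonometry bears this out. The sketch below is my own model, not Ars Technica’s: it treats the curved panel as an arc with a typical “4200R” curvature, whose edges bow towards the viewer by the sagitta but whose chord is slightly narrower than the equivalent flat panel:

```python
import math

def viewing_angle(width, distance, radius=None):
    """Horizontal angle (degrees) a screen subtends at a centred viewer.
    For a curved screen of curvature `radius`, the chord is shorter than
    the arc but the edges sit closer to the viewer by the sagitta."""
    if radius is None:  # flat screen
        return math.degrees(2 * math.atan((width / 2) / distance))
    half = width / (2 * radius)               # half the arc angle
    chord = 2 * radius * math.sin(half)       # straight-line width
    sagitta = radius * (1 - math.cos(half))   # how far the edges bow forward
    return math.degrees(2 * math.atan((chord / 2) / (distance - sagitta)))

WIDTH = 1.21   # metres, roughly the width of a 55" 16:9 panel
RADIUS = 4.2   # metres, a common "4200R" curvature

for d in (0.9, 3.0):   # ~3 feet vs a typical couch distance
    flat = viewing_angle(WIDTH, d)
    curved = viewing_angle(WIDTH, d, RADIUS)
    print(f"{d}m: flat {flat:.1f}°, curved {curved:.1f}°, gain {curved - flat:.2f}°")
```

At 3 feet the curve buys you roughly 2.5° of extra apparent width; at a normal 3m couch distance the gain collapses to around a quarter of a degree, well below anything you’d actually notice.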
The picture distortion argument is similarly flawed as most reviewers report seeing increased geometric distortions when viewing content on a curved screen. The fundamental problem here is that the content wasn’t created with a curved screen in mind. Cameras use rectilinear lenses to capture images onto a flat sensor plane, something which isn’t taken into account when the resulting image is displayed on a curved screen. Thus the image is by definition distorted and since none of the manufacturers I’ve seen talk about their image correction technology for curved screens it’s safe to assume they’re doing nothing to correct it.
So if you’ve been eyeing off a new TV upgrade (like I recently have) and are thinking about going curved the simple answer is: don’t. The premium charged for that feature nets no benefits in typical usage scenarios and is far more likely to create problems than it is to solve them. Thankfully there are still many great flat screens available, typically with all the same features as their curved brethren for a much lower price. Hopefully we don’t have to wait too long for this fad to pass as it’s honestly worse than 3D and 4K, which at least had some partial benefits in certain situations.
Fiber is the future of all communications, a fact that any technologist will be able to tell you. Whilst copper is still the mainstay for the majority its lifetime is limited as optics are fast approaching the point where they’re feasible for everything. However even fiber has its limits, ones that some feel we’re going to hit sooner rather than later, which could cause severe issues for the Internet’s future. New research coming out of the University of California, San Diego however paves the way for boosting our fiber networks’ bandwidth significantly.
Today’s fiber networks are made up of long runs of fiber optic cable interspersed with devices called repeaters or regenerators. Essentially these devices are responsible for boosting the optical signal, which degrades as it travels down the fiber. The problem with them is that they’re expensive, add latency and are power hungry, attributes that aren’t exactly desirable. These problems are born of a physical limitation of fiber networks which puts an upper limit on the amount of power you can send down an optical cable. Past a certain point the more power you put down a fiber the more interference you generate, meaning there’s only so much you can pump into a cable before you’re doing more harm than good. The new research however proposes a novel way to deal with this: interfere with the signal before it’s sent.
The problem with interference generated by increasing the power of the signal is that it’s unpredictable, meaning there’s really no good way to combat it. The researchers however figured out a way of conditioning the signal before it’s transmitted which allows the interference to become predictable. Then at the receiving end they use what they’re calling “frequency combs” to reverse the interference, pulling a useful signal out of the noise. In lab tests they were able to send a signal over 12,000 km without the use of a repeater, an absolutely astonishing distance. Using such technology could drastically improve the efficiency of our current dark fiber networks, which would go a long way towards avoiding the bandwidth crunch.
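The principle, stripped of all the optics, is that a deterministic distortion can be run backwards. The toy model below is purely illustrative and nothing like the researchers’ actual system: it applies a Kerr-style phase shift proportional to each sample’s instantaneous power, then has the receiver apply the opposite rotation to recover the original stream:

```python
import numpy as np

rng = np.random.default_rng(42)
# A random QPSK-like symbol stream standing in for the optical signal
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1000)

GAMMA = 0.3  # illustrative nonlinearity coefficient, not a real fiber value

def fibre(signal):
    """Toy self-phase modulation: phase shift proportional to power."""
    return signal * np.exp(1j * GAMMA * np.abs(signal) ** 2)

received = fibre(symbols)
# Because the distortion here is deterministic (and phase-only, so the
# power is unchanged), the receiver can undo it exactly by conjugating
# the same phase shift.
recovered = received * np.exp(-1j * GAMMA * np.abs(received) ** 2)
print(np.allclose(recovered, symbols))  # True: the original stream comes back
```

The real system also has to contend with dispersion and cross-channel effects, which, as I understand it, is where the frequency combs come in: they keep the channels mutually phase-locked so the distortion between them stays predictable enough to invert.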
It will be a little while before this technology makes its way into widespread use as, whilst it shows a lot of promise, the application within the lab falls short of a practical implementation. Current optical fibers carry around 32 different signals whereas the system the researchers developed can currently only handle 5. Ramping up the number of channels it can support is a non-trivial task but at least it’s an engineering challenge and not a theoretical one.
I had grand ideas that my current PC build would be all solid state. Sure the cost would’ve been high, on the order of $1500 to get about 2TB in RAID10, but the performance potential was hard to deny. In the end however I opted for good old fashioned spinning rust mostly because current RAID controllers don’t do TRIM on SSDs, meaning I would likely be in for a lovely performance downgrade in the not too distant future. Despite that I was keenly aware of just how feasible it was to go full SSD for all my PC storage and how the days of the traditional hard drive are likely to be numbered.
Ever since their first commercial introduction all those years ago SSDs have been rapidly plummeting in price, with the most recent drop coming off the back of a few key technological innovations. Whilst they’re still an order of magnitude away from traditional HDDs in terms of cost per gigabyte ($0.50/GB for SSD, $0.05/GB for HDD) the gap in performance between the two is more than enough to justify the current price differential. For laptops and other portable devices that don’t require large amounts of onboard storage SSDs have already become the sole storage platform in many cases, however they still lose out for large scale data storage. That differential could close quickly however, although I don’t think SSDs’ rise to fame will be instantaneous past that point.
One thing that has always plagued SSDs is the question around their durability and longevity, as the flash cells upon which they rely have a defined life in terms of read and write cycles. Whilst SSDs have, for the most part, proven reliable even when deployed at scale, the fact is that they’ve really only had about 5 or so years of production level use to back them up. Compare that to hard drives, which have track records stretching back decades, and you can see why many enterprises are still tentative about replacing their fleet en-masse; we just don’t know how the various components that make up a SSD will stand the test of time.
However concerns like that are likely to take a back seat if things like a 30TB drive by 2018 come to fruition. Increasing capacity on traditional hard drives has always proven to be a difficult affair as there’s only so many platters you can fit in the standard space. Whilst we’re starting to see a trickle of 10TB drives into the enterprise market they’re likely not going to be available at a cost effective point for consumers anytime soon and that gives a lot of leeway to SSDs to play catchup to their traditional brethren. That means cost parity could come much sooner than many anticipated, and that’s the point where the decision about your storage medium is already made for the consumer.
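For what it’s worth, you can put rough numbers on when parity might arrive. The starting prices below come from the $/GB figures earlier; the annual decline rates are my own guesses, with SSD prices assumed to fall much faster than HDD prices:

```python
# Toy projection of SSD vs HDD cost parity. Starting prices are from
# the text; the annual decline rates are assumptions for illustration.
ssd, hdd = 0.50, 0.05            # $/GB today
SSD_DROP, HDD_DROP = 0.40, 0.10  # assumed yearly price declines

years = 0
while ssd > hdd:
    ssd *= 1 - SSD_DROP
    hdd *= 1 - HDD_DROP
    years += 1
print(f"Parity in roughly {years} years")
```

Under those admittedly hand-wavy assumptions parity lands around the 6 year mark; any steepening of the SSD decline pulls it in considerably, which is exactly why those 30TB drive roadmaps matter.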
We likely won’t see spinning rust disappear for the better part of a decade but the next couple of years are going to see something of a paradigm shift in terms of which platform is considered first. SSDs already reign supreme as the drive to have your operating system residing on; all they require now is a comparable cost per gigabyte to graduate beyond that. Once we reach that point it’s likely to be an inflection point in the way we store our data and, for consumers like us, a great time to upgrade our storage.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of our nearby celestial objects, however for scientific observations they’re usually not as desirable. Instead most large scientific telescopes use what’s called a reflecting telescope which utilizes a large mirror which then reflects the image onto a sensor for capture. The larger the mirror the bigger and more detailed picture you can capture, however bigger mirrors come with their own challenges especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however that does come with some added benefits like being able to eliminate distortions created by the atmosphere. NASA’s new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like their larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to traditional mirrors on telescopes, especially space based ones like the Hubble, this has the potential to significantly reduce weight whilst at the same time dramatically increasing the size of the mirror we can use. The bigger the mirror the more light that can be captured and analysed, and a mirror built from this cloud of particles could be many times greater than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter which was used to validate the concept using 2 simulated “stars” that shone through it. Whilst the current incarnation used multiple exposures and a lot of image processing to create the final image it does show that the concept could work, however it requires much more investigation before it can be used for observations.
A potential mission to verify the technology in space would use a small satellite with a prototype cloud no bigger than a bottle cap. This would be primarily aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proves successful, they could move on to capturing images. Whilst there doesn’t appear to be a strict timeline for that yet this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program and so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope is anyone’s guess but such technology does show incredible promise.
I understand that a basic understanding of circuit fundamentals isn’t in the core curriculum for everyone but the lack of knowledge around some electrical phenomena really astounds me. Whilst most people understand the idea of radio waves, at least to the point of knowing that they power our wireless transmissions and that they can be blocked by stuff, many seem to overestimate the amount of power that these things carry. This misunderstanding is what has led several questionable Kickstarter campaigns to gain large amounts of funding, all on the back of faulty thinking that simply doesn’t line up with reality. The latest incarnation of this comes to us in the form of the Nikola Phone Case which purports to do things that are, simply, vastly overblown.
The Nikola Phone Case states that it’s able to harvest the energy that your phone “wastes” when it’s transmitting data using its wireless capabilities. They state that your phone uses a lot of power to transmit these signals and that only a fraction of them end up making their way to their destination. Their case taps into this wasted wireless signal, captures it, stores it and then feeds it back into your phone to charge its battery. Whilst they’ve yet to provide any solid figures (those are forthcoming in the next couple of weeks according to the comments section) they have a lovely little animated graph that shows one phone at 70% after 8 hours (with case) compared to another at 30% (without case). Sounds pretty awesome right? Well like most things which harvest energy from the air it’s likely not going to be as effective as its creators are making it out to be.
For starters the idea hinges on tapping into the “wasted” energy, which implies that it doesn’t mess with the useful signal at all. The problem is there’s really no way to tell which signal is useful and which isn’t so, most likely, the case simply gets in the way of all of it. This would lead to a reduction in signal strength across all radios, which usually means the handset would attempt to boost the signal in order to improve reception, using more power in the process. The overall net effect would likely be either the same battery life or worse, not the claimed significant increase.
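Even granting the harvesting premise, the energy budget doesn’t add up. Here’s a back-of-envelope sketch where every figure is my own assumption, each one deliberately tilted in the case’s favour:

```python
# Generous back-of-envelope on harvestable RF energy from a phone.
# All figures are assumptions chosen to favour the Nikola case.
TX_POWER_W = 0.25        # assumed average cellular transmit power
HARVEST_FRACTION = 0.10  # assume the case recaptures a generous 10%
HOURS = 8                # the timeframe in Nikola's animated graph
BATTERY_WH = 10          # roughly a 2700mAh battery at 3.7V

harvested_wh = TX_POWER_W * HARVEST_FRACTION * HOURS
pct = 100 * harvested_wh / BATTERY_WH
print(f"Recovered {harvested_wh:.1f}Wh, about {pct:.0f}% of the battery")
```

Even with the numbers stacked like that you claw back a couple of percent of the battery over a working day, nowhere near the 40 point gap their graph shows.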
There’s also the issue that battery drain on most smartphones isn’t primarily driven by the device’s radio. Today’s smartphones carry processors that are as powerful as some desktops were 10 years ago and thus draw an immense amount of power. Couple that with the large screens and the backlights that power them and you’ll often find that these things total up to much more battery usage than all of the radios do. Indeed if you’re on an Android device you can check this for yourself and you’ll likely find that the various apps running in the background are responsible for most of the battery usage, not your radio.
There’s nothing wrong with the Nikola Phone Case at a fundamental technological level, it will be able to harvest RF energy and pump it back into your phone no problem, however the claims of massive increases in battery life will likely not pan out to be true. Like many similar devices that have come before it they’ve likely got far too excited about an effect that won’t be anywhere near as significant outside the lab. I’ll be more than happy to eat my words if they can give us an actual, factual demonstration of the technology under real world circumstances but until then I’ll sit on this side of the fence, waiting for evidence to change my mind.
I’ve always appreciated the simple beauty of Zen gardens, mostly from afar as my natural instinct is to run directly into the perfectly groomed sand and mess it all up. That being said, whilst I may have kindled an interest in gardening recently (thanks to my wife giving me some chilli plants for Christmas) I have very little interest in creating one of these myself, even of the desktop variety. The video below however demonstrates a kind of Zen garden that I could very well see myself spending numerous hours with, mostly because it’s driven by some simple, but incredibly cool, science.
On the surface it seems like a relatively simple mechanism of action: two steel balls roll their way across the sand and produce all sorts of patterns along the way. The reality is quite a bit more interesting however as, if you watch closely, you can see that the two steel balls’ motion is linked around a single point. This is because, as Core77’s post shows, there’s only a single arm underneath the table, which most likely houses 2 independent magnets that are able to slide up and down its length. In all honesty this is far more impressive to me than how I would’ve approached the problem as it makes producing the complex patterns that much more challenging. If it was left to me I would’ve had a huge array of magnets underneath the surface, but that seems like cheating after seeing this.
When you think of Apple what kind of company do you think they are? Many will answer that they’re a technology company, some a computing company, but there are precious few who recognise them as a hardware company. Whilst they may run large non-hardware enterprises like the App Store and iTunes these all began their lives as loss-leaders for their respective hardware platforms (the iPhone and the iPod). OSX didn’t start out its life in that way, indeed it was long seen as the only competitor to Windows with any significant market share, however it has been fast approaching the same status as its iCompanions for some time now and the recently announced El Capitan version solidifies its future.
I haven’t covered an OSX version in any detail since I mentioned OSX Lion in passing some 4 years ago now and for good reason: there’s simply nothing to write about. The Wikipedia entry on OSX versions sums up the differences in just a few lines and for the most part the improvements with each version come down to new iOS apps being ported across and the vague “under-the-hood” improvements that come with every release. The rhetoric from Apple surrounding the El Capitan release even speaks to this lack of major changes directly, stating things like “Refinements to the Mac Experience” and “Improvements to System Performance” as their key focus. Whilst those kinds of improvements are welcome in any OS release the fact that the last 6 years haven’t seen much in the way of innovation in the OSX product line is telling of where it’s heading.
The Mountain Lion release of OSX was the first indication that OSX was likely heading towards an iLine style of product with many iOS features making their way into the operating system. Mavericks continued this with the addition of another 2 previously iOS exclusives and Yosemite bringing Handoff to bridge between other iOS devices. El Capitan doesn’t make any specific moves forward in this regard however it is telling that Apple’s latest flagship compute product, the revamped and razor thin Macbook, is much more comparable to an upscale tablet than it is to an actual laptop. In true Apple fashion it doesn’t really compare with either, attempting to define a new market segment in which they can be the dominant player.
If it wasn’t obvious, what I’m getting at here is that OSX is fast approaching two things: becoming another product in the iOS line and, in terms of being a desktop OS, irrelevance. Apple has done well with their converged ecosystem, achieving a level of unification that every other ecosystem envies, however that strategy is most certainly focused on the iOS line above all else. This is most easily seen in the fact that the innovation happens on iOS and is then ported back to OSX, not something I feel Apple would want to continue doing long into the future. Thus it would seem inevitable that OSX will eventually pass the torch to iOS running on a laptop form factor, it’s just a matter of when.
This is not to say it would be a bad thing for the platform, far from it. In terms of general OS level tasks OSX performs more than adequately and has done so for the better part of a decade. What it does mean however is that the core adherents who powered Apple’s return from the doldrums all those years ago are becoming a smaller part of Apple’s overall strategy and will thus receive much less love in the future. For Apple this isn’t much of a concern, the margins on PCs (even their premium models) have always been slim compared to their consumer tech line. However those who have a love for all things OSX might want to start looking at making the transition if an iOS based future isn’t right for them.