The last time I wrote about Amazon Prime Air was almost 2 years ago to the day and back then it seemed to be little more than a flight of fancy. Drones, whilst becoming somewhat commonplace, were still an emerging space, especially when it came to regulations and companies making use of them. Indeed the idea instantly ran afoul of the FAA, something which Amazon was surprisingly blasé about at the time. Still there had been musings of them continuing development of the program and today they’ve shown off another prototype drone that they might use in the future.
The drone is an interesting beast, capable of both VTOL and regular flight. This was most likely done to increase the effective range of the craft as traditional flight is a lot less energy intensive than pure VTOL flight. The new prototype drone has a stated range of 16 miles (about 25KM) which you’d probably have to cut in half to account for the return trip. Whilst that’s likely an order of magnitude above the previous prototype they showcased 2 years ago it still means that a service based on them will either be very limited or Amazon is planning a massive shakeup of its distribution network.
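As a back-of-the-envelope sketch of what that range means in practice (the 16 mile figure is from above; the halving assumes the drone returns to base with no reserve, which is my own simplification):

```python
# Rough sketch of the delivery radius implied by a stated 16 mile flight range,
# assuming the drone must fly back to its launch point afterwards.

MILES_TO_KM = 1.609344

def delivery_radius_km(range_miles: float, round_trip: bool = True) -> float:
    """Convert a stated flight range to a usable delivery radius in kilometres."""
    range_km = range_miles * MILES_TO_KM
    return range_km / 2 if round_trip else range_km

total = delivery_radius_km(16, round_trip=False)  # ~25.7 km total range
radius = delivery_radius_km(16)                   # ~12.9 km usable radius
print(f"total range: {total:.1f} km, delivery radius: {radius:.1f} km")
```

That roughly 13KM circle around each depot is what makes a drone service look so limited under a centralised distribution model.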
Of course the timing of this announcement (and the accompanying video below), mere hours before the yearly Cyber Monday sale starts in earnest, is no coincidence. Amazon Prime Air is undeniably a marketing tactic, one that’s worked well enough in the past to warrant them trying it again in order to boost sales on this day. On the flip side Amazon does seem pretty committed to the idea, with their various proposals for airspace usage and “dozens of prototypes” in the works. However until they start offering the service to real customers it’s going to be easy to remain skeptical.
Last time I wrote about Amazon Prime Air one of my local readers mentioned that a similar service was looking to take off here in Australia. The offering was going to be a joint effort between Flirtey, a delivery drone developer, and Zookal, a local textbook sales and rental service. They were targeting the middle of last year for their first delivery by drone, however that never came to pass. Indeed an article from earlier this year was all I could dredge up on the partnership, and they have still yet to use drones commercially. To their credit Flirtey did make the first drone delivery in the USA in July this year, so the technology is there; it just needs to be put to use.
Whether or not something like this will see widespread adoption however is something I’m still not sure about. Right now the centralized distribution models that most companies employ simply don’t work with the incredibly limited range that most drones have. Even if the range issue could be solved I’m still not sure it would be economical to use them unless the delivery fees were substantially higher (and then how many customers would pay for that?). Don’t get me wrong, I still think it’d be incredibly cool to get something delivered by drone, but at this point I’m still not 100% sold on the idea that it can be done economically.
The current torrent of AAA releases makes for an overwhelming selection for a reviewer like myself. There are titles that we’re simply expected to play, like one I shall not mention until this time next week, and others which only appeal to a very certain demographic. Indeed that’s exactly why all these games can launch at the same time and still find the financial success they were looking for. However most of these games are incredible time sinks and my one-review-a-week schedule simply can’t accommodate them. Thus I look to the poor indies who found themselves in the mix with the giants of the industry to fill the gap with their shorter, more concise experiences. Refunct certainly fills the first requirement aptly, clocking in as probably the quickest game I’ve ever fully completed.
Refunct is a first person platforming game, something which I’m sure strikes fear into the hearts of all gamers. You see platforming in first person has always been something of a hit-and-miss affair, both in the literal and figurative sense. Judging distances in first person is a flawed endeavour as there’s no real way (apart from trial and error) to figure out how far your character can move or whether or not you jumped too early. Refunct however has billed itself as a game that everyone can play, meaning that the developer must think they’ve addressed the first person platforming problems. To an extent they have, but not because of any new or clever mechanics.
Instead Refunct is incredibly generous with its hit detection when it comes to near misses on ledges and the height that your character can jump. For the first few jumps I was expecting the normal platforming behaviour (you don’t make it, you fall flat on your face), however if you’re within a certain tolerance your character will pull themselves up. Later on, when it’s revealed that you can push yourself off walls, you can exploit this somewhat as even full misses can result in a successful jump if you bounce yourself off the wall. For new players this means that jumping puzzles, which typically rely on pinpoint precision, are much more forgiving. For power players like myself though it’s an easily exploitable system that significantly reduces the play time of Refunct.
Refunct uses the Unreal engine and, judging by the world effects it makes generous use of, must be running on the latest version of said engine. Whilst it’s one of the most basic games you’re ever likely to come across (pretty much everything is a block or cylinder) it does look incredibly nice. The subtle day/night cycles, clouds gently passing by in the background and the accompanying soundtrack make for a rather pleasant experience. That being said, in terms of graphics it’s not much more than a demonstration of Unreal’s inbuilt capabilities. I guess where I’m going with this is: I like the way Refunct looks, but it feels like little effort went into making it look that way.
From a gameplay perspective Refunct is pretty seamless, with the generous platforming mechanics smoothing over what could have been many frustrating moments. At the same time however this means that for any kind of regular gamer Refunct is likely to be little more than a curious distraction, as I was able to complete the whole thing, to 100%, in just 22 minutes. For some this might be a problem, especially in the age of numerous other $2 games that give many more hours of play time, so it’s something to bear in mind if a simplistic first person platformer sounds like something you’d like to play.
Refunct is a succinct first person platformer that is far more forgiving than its genre would lead you to believe. The generous platforming mechanics mean that a wide variety of players will be able to play it start to finish without the frustration that accompanies nearly every other title that incorporates such mechanics. Its simple and clean aesthetic makes for a kind of zen experience which is only heightened by its great soundtrack. However it is a brutally short experience, almost shorter than the time it will take many to download it. Overall I’d recommend it, but only if you’re looking for an after-AAA mint to cleanse your palate before you set yourself up for your next meal.
Refunct is available on PC right now for $2.99. Total play time was 22 minutes with 100% of the achievements unlocked.
There’s little doubt now that the Multi-Technology Mix was never a viable path forward for the NBN. The tenets of faster, cheaper and sooner have all fallen by the wayside in one way or another. The speed guarantees were dropped very quickly as NBNCo (now known as just nbn™) came face to face with the reality that the copper network simply couldn’t support them. The cost of their solution has come into question numerous times and has been shown to be completely incorrect; worse still, the subsequent cost blowouts are almost wholly attributable to the changes made by the MTM switch, not the original FTTP solution. Lastly, with the delays that the FTTN trials have experienced along with the disruption to provisioning activities that were already under way, there is no chance that we’ll have it sooner. To top it all off it now appears that the HFC network, the backbone upon which Turnbull built his MTM idea, isn’t up to the task of providing NBN services.
The leaked report shows that, in its current state, the Optus HFC network simply doesn’t have the capacity, nor is it up to the standards, required to service NBN customers. Chief among the numerous issues listed in the presentation is the fact that the Optus cable network is heavily oversubscribed and would require additional backhaul and nodes to support new customers. Among the other issues listed are pieces of equipment in need of replacement, the presence of ingress noise reducing user speeds and the complexity of the established HFC network’s multipathing infrastructure. All told, the cost of remediating this network (or “overbuilding” it, as they are saying) ranges from $150 million up to $800 million, in addition to the capital already spent to acquire the network.
Some of the options presented to fix this situation are frankly comical, like the idea that nbn should engage Telstra to extend their HFC network to cover the areas currently serviced by Optus. Further options peg FTTP as the most expensive with FTTdp (fiber to the distribution point) and FTTN coming in as the cheaper alternatives. The last one is some horrendous mix of FTTdp and Telstra HFC which would just lead to confusion for consumers, what with 2 NBN offerings in the same suburb with wildly different services and speeds available on them. Put simply, Optus’ HFC network being in the state it is has no good solution other than the one the original NBN plan had in mind.
The ubiquitous fiber approach that the original NBN sought to implement avoided all the issues that the MTM solution is now encountering, for the simple fact that we can’t trust the current state of any of the existing networks deployed in Australia. It has been known for a long time that the copper network is aging and in dire need of replacement, unable to reliably provide the speeds that many consumers now demand. The HFC network has always been riddled with issues, with nearly every metro deployment suffering from major congestion from the day it was implemented. Relying on both these things to deliver broadband services was doomed to fail and it’s not surprising that that’s exactly what we’ve seen ever since the MTM solution was announced.
Frankly this kind of news no longer surprises me. I had hoped that the Liberals would simply take credit for the original idea that Labor put forward, but they went one step further and trashed the whole thing. A full FTTP solution would have catapulted Australia to the forefront of the global digital economy, providing benefits far in excess of its cost. Now however we’re likely decades away from achieving that, all thanks to the short-sightedness of a potentially one term government. There really is little to hope for when it comes to the future of the NBN and there’s no question in my mind of who is to blame.
It seems that Blue Origin is ready to step out of the cloak of secrecy it has worn for so long. Once an enigmatic and secretive company, they have been making many more waves as of late, setting the scene to become more heavily involved in the private space industry. Progress hasn’t been all that fast for them, although, honestly, it’s hard to tell with the small dribs and drabs of information they make public. Still they managed to successfully fly their current launch vehicle, New Shepard, at the end of April this year. That test wasn’t 100% successful however as, whilst the crew capsule was returned safely, the booster (which has the capability to land itself) did not fare so well and was destroyed. Today marks a pivotal moment for Blue Origin as the second flight of their New Shepard craft was 100% successful, paving the way for their commercial operations.
The New Shepard isn’t the typical craft we’ve come to expect from private space companies. It’s much more akin to Virgin Galactic’s SpaceShipTwo as it’s designed for space tourists rather than transporting cargo or humans to orbital destinations. That doesn’t mean it’s any less interesting however, as they’ve already demonstrated some pretty amazing technology that few other companies have been able to replicate. It’s also one of the most unusual approaches to sub-orbital tourism I’ve seen, almost being a small scale replica of a Falcon 9 with a couple of unusual features that enable it to be a fully reusable craft.
A ride on a New Shepard will take you straight up at speeds of almost Mach 4, getting you to a height of just over 100KM, the widely agreed boundary between Earth and space. However not all of the rocket will be going up there with you; once the booster has finished its job it will disconnect from the crew capsule, allowing the remaining momentum to propel the small cabin just a little bit further. The cabin then descends back down to Earth, landing softly with the aid of the standard drag chutes that are common on capsule based craft. The booster however uses some remaining fuel to soft land itself, and appears to be able to do so with rather incredible accuracy.
The final part of the video is what failed on the previous launch as they lost hydraulic pressure shortly after the craft took off. In this video though it’s clear to see the incredible engineering at work as the rocket is constantly gimbaling (moving around) the thrust in order to make sure it can land upright and in the desired location. This is the same kind of technology that SpaceX has been trialling with its recent launches, although they have the slightly harder target of a sea barge and a much larger rocket. Still the fact that Blue Origin have it working, even on a smaller scale, says a lot for the engineering expertise that’s behind this rocket.
I’m hopeful that Blue Origin will continue being a little more public as, whilst they might not be playing with the big boys just yet, they’ve got all the makings of yet another great private space company. The New Shepard is a fascinating design that has proven to be highly capable with its second test flight and I have no doubt that many more flights are scheduled for the near future. It will be very interesting to see if the design translates well to their proposed Very Big Brother design as that could rocket (pun intended) them directly into competition with SpaceX.
It certainly is a great time to be a space nut.
The blending of organic life and electronics is still very much in its nascent stages. Most of the progress made in this area is thanks to the adaptability of the biological systems we’re integrating with, not so much the technology. However even small progress in this field can have wide reaching ramifications, sometimes enough to dramatically reframe the problem spaces we work in. One such small step has been made recently by a team from Linköping University in Sweden, who have managed to create working electronic circuits within roses.
The research, born out of the Laboratory of Organic Electronics division of the university, experimented with ways of integrating electronics into rose plants so they could monitor, and potentially influence, the growth and development of the plant. To do this they looked at infusing the rose with a polymer that, once ingested into the plant, would form a conductive wire. Attempts with many polymers simply resulted in the death of the plant as they either poisoned it or blocked the channels the plant used to carry nutrients. However one polymer, called PEDOT-S:H, was readily taken up by the roses and didn’t cause any damage to the plant. Instead it formed a thin layer within the xylem (one of the nutrient transport mechanisms within plants) that produced a conductive hydrogel wire up to 10cm long.
The researchers then used this wire to create some rudimentary circuits within the plant’s xylem structure. The wire itself, whilst not an ideal conductor, was surprisingly conductive, with a contact resistance of 10kΩ. To put that in perspective, the resistance of human skin can be up to 10 times more than that. Using this wire as a basis the researchers went on to create a transistor by connecting source, drain and gate probes. This transistor worked as expected and they went one step further to create logic gates, demonstrating that a NOR gate could be built using the hydrogel wire as the semiconducting medium.
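For reference, the NOR gate they demonstrated outputs high only when both inputs are low, and it’s a universal gate from which any other logic can be built. A quick software truth-table sketch, purely illustrative and nothing specific to the organic transistors themselves:

```python
def nor(a: int, b: int) -> int:
    """NOR: output is high (1) only when both inputs are low (0)."""
    return int(not (a or b))

# Print the full truth table.
for a in (0, 1):
    for b in (0, 1):
        print(f"NOR({a}, {b}) = {nor(a, b)}")

# NOR is universal: NOT(a) == NOR(a, a), and AND/OR can be composed from it.
assert nor(0, 0) == 1
assert nor(0, 1) == nor(1, 0) == nor(1, 1) == 0
```

Being able to realise even this one gate in living tissue is what makes the result significant: universality means more complex logic is, in principle, just a matter of wiring.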
This kind of technology has the potential to revolutionize the way we monitor and influence plant growth and development. Essentially it allows us to create circuitry within living plants, using their own cellular structures as a basis, that can act as sensors or regulators for the various chemical processes that happen within them. Of course there’s still a lot of work to be done in this area, namely modelling the behaviour of this organic circuitry in more depth to ascertain what kind of data we can get and which processes we can influence. Suffice it to say it should become a very healthy area of research, as there are numerous potential applications.
Widespread vaccination programs have been the key to driving many crippling diseases to the brink of extinction. This boils down to one simple, irrefutable fact: they work and are incredibly safe. However the anti-vaccination movement, which asserts all sorts of non-scientific drivel, has caused vaccination rates to drop to levels where herd immunity starts to become compromised. This presents a number of challenges as unvaccinated children and adults are a threat not only to themselves but to others who have contact with them. Indeed the problem may be worse than first thought, as it appears that even among those who do vaccinate the completion rate is low, with 1 in 3 two year olds in the USA not having completed the recommended vaccination course.
The study, published by RTI International (a non-profit research institute based in North Carolina), showed that up until a child was 2 years old the state of their vaccinations was quite fluid. Indeed the vast majority of children weren’t compliant with the required vaccination schedule, with most of them receiving a dose outside the recommended window. Upon reaching approximately 24 months of age most had caught up with the required schedule, although a staggering 33% of them were still non-compliant at this age. This might not seem like much of an issue, since the majority do eventually get their vaccinations, however there are sound scientific reasons for the scheduling of vaccines. Ignoring them has the potential to limit, or completely negate, their efficacy.
The standard vaccine schedule has been developed to maximise the efficacy of vaccines and to ensure that, should a child contract a disease, potentially life threatening complications are reduced or eliminated. The pertussis (whooping cough) vaccine is estimated to have an extremely high efficacy rate in young children, up to 95%, but that begins to drop off rapidly if the vaccine is administered later in life. Similar efficacy slopes are seen in vaccines for other childhood diseases, such as the combined MMR vaccine. These vaccines are also administered around the time when the potential impacts of the disease are at their greatest; missing a vaccine at that point runs the risk of severe complications should the disease be contracted.
It’s unsurprising that the study found that the western states had the lowest rates of vaccination, as that’s where the anti-vaccination movement has been most active. Just this year there was an outbreak of measles there and the year before that a whooping cough epidemic. Interestingly the southern states had the highest rates of vaccination, as shown by the snippet of the infographic above. Whilst the anti-vaccination movement is undeniably an influence in the hodge-podge vaccination approach that seems prevalent, the blame here lies squarely with the parents who aren’t adhering to the vaccination schedule.
It’s understandable that some of these things can slip, as the challenges of being a parent are unending, but when it comes to a child’s health there’s really no competing priority. For parents this means paying better attention to their doctor’s advice and ensuring that the vaccine schedule is adhered to more closely. Additionally the government could readily help alleviate this issue by developing better reminder systems, ones that are more in tune with modern parents’ lives. Hopefully these statistics alone will be enough to jar most into action.
The 3 year, 3 developer cycle that Call of Duty switched to has meant that it’s been a little longer between drinks for Treyarch, once considered the poor stepchild to Infinity Ward. For players like me, who enjoy Treyarch’s slightly more story oriented style for the single player, it’s been a bit of a wait, but all hopes were that the extra polish would be worth it. After spending the last week with Call of Duty: Black Ops III I can definitely say the wait has been worth it, although Treyarch might need to come down from the giant ivory tower they’ve crafted for themselves.
The year is 2065 and you’ve been sent to rescue hostages from the NRC, the latest terrorist organisation to begin its war against the western world. You, along with your partner Hendricks, have been sent to Ethiopia to rescue the hostages and a VIP who’s been captured by this group. Whilst the extraction was a success, you were left behind and mortally wounded by one of the NRC’s combat robots. You’re transported back to the Coalescence HQ for emergency medical treatment, bestowing upon you cybernetic abilities that elevate your combat capabilities far beyond those of any normal soldier. What follows are your exploits as a CIA black operative, unravelling a terrible conspiracy that goes all the way to the top.
Considering that Black Ops 3 was released on nearly all platforms (including 2 last gen ones) it’s great to see it able to use all the grunt of a modern PC to render some truly stunning graphics. On first release though this came at a severe cost to performance, as smoke and other particle heavy effects would drag an otherwise buttery smooth experience down to a slideshow. Thankfully this was a bug and was fixed in a patch last week, allowing me to once again ratchet all the settings up to maximum. Unlike other Call of Duty titles though you’ll rarely have any time to stop and take in the view, as the game is all about action all the time (save for the last section, which I’ll dive into more later).
Black Ops 3 is the definition of a corridor shooter, putting you in tight spaces with hordes of enemies that you’ll need to mow down in order to progress. Like Advanced Warfare before it though there are a few extra mechanics thrown into the mix to keep things fresh, most of which come in the form of various powers granted to you by your cyber augments. Also, unlike most Call of Duty games where your loadout is specified for you, Black Ops 3 gives you the option to build out your own kit for each mission. You’re even given a briefing panel which allows you to judge which kit would be best for each engagement. Apart from that (and the multiplayer, of course) there’s not much more to say about Black Ops 3, as it really does feel like Advanced Warfare with the trademark Treyarch psychological twist.
The buttery smooth, fast paced FPS combat that’s a hallmark of the Call of Duty series is back once again in Black Ops 3. The additional enhancements you’re given as part of your cybernetic upgrades go a long way to alleviating some of the issues that plagued previous instalments in the series. Notably this includes things like target highlighting, “danger zones” shown on the floor to give you an idea of what might happen if you stand there, and the vast array of powers you have to devastate your enemy. However one piece of advice I’ll give is that, if you’re just looking to enjoy the single player, avoid the higher difficulties. Instead of making the enemies tougher they essentially make you weaker, with the hardest difficulty allowing any enemy to one-shot you. Sure that does provide some form of challenge but, honestly, it’s more tedious than anything.
For all its polish though there are still some rough bits in the single player. Quite often enemies will be able to shoot through or glitch through walls which, if you’re playing on anything above normal difficulty, will mean your instant demise. This became painfully clear on the final mission when you’re storming the last building and mechs, flying drones and anything else would just pass through terrain to get to you. I can handle getting nailed by unseen targets, that just means you need to be aware of where they are for next time around, but when you literally can’t do anything to stop them it really does grate on you.
The story retains Treyarch’s signature psychological thriller style, this time around with a sci-fi twist. To begin with it’s interesting, as the characters deal with the implications of technology and the enhancements it brings them. Things start to come unstuck a bit as they dive deeper into the (highly predictable) conspiracy aspects and it comes completely unglued towards the end when the symbolism gets dialled up to 11. Probably the worst part about it though is that, if you read a couple of specific things in game, the whole story is basically rendered moot anyway. In all honesty it started off strong before it tried to M. Night Shyamalan everything and completely disappeared up its own ass with that one piece of text.
The multiplayer is your mostly standard Call of Duty affair with levels, unlocks and customizations galore. It uses the familiar “Pick 10” system, allowing you to create a character that fits your play style perfectly. The biggest change that comes with Black Ops 3 is the inclusion of “specialist” classes, which are essentially base character models that come with special abilities. These can be either a weapon, which can be incredibly devastating when used right, or an ability which usually gives you a tactical advantage over the enemy. This, combined with Call of Duty’s typical huge array of weaponry, makes for some incredibly varied combat, something which can be a bit overwhelming when you first start out.
Probably my only gripe is that the levelling is a bit too slow for casual-core players like myself. I’ve played about 4 hours at this point and my main weapon, the Kuda SMG, is nowhere near unlocking all the mods that I want to use. This means that, for nearly all of my current multiplayer time, I’ve been using the Vanguard starting class since it has a fully customized Kuda as part of the loadout. Treyarch is aware of this and is making up for it by making this weekend a double XP weekend, but honestly that feels like a band-aid solution to the problem. Having a rested XP system or something similar would make the experience a lot better for players like myself, as otherwise the longevity of the multiplayer will be severely limited.
Call of Duty: Black Ops III maintains the level of quality we’ve come to expect from the series, adding the Treyarch signature psychological thriller style to the future combat motif that has permeated the last few instalments. The single player is pretty much as you’d expect, maintaining the same fluid FPS experience even if it does overstay its welcome a little towards the end. The multiplayer, whilst suffering from a rather slow levelling system, is just as good as it ever was. As always the Call of Duty series might not be for everyone, but for those of us who enjoy a spectacle, along with a few solid hours of multiplayer fun, there’s really no other title to turn to right now than Black Ops III.
Rating: 8.75 / 10
Call of Duty: Black Ops III is available on PC, PlayStation 3, PlayStation 4, Xbox 360 and Xbox One right now for $79, $59, $79, $59 and $79 respectively. Game was played on PC with a total of 43% of the achievements unlocked.
Friction welding is a fascinating process, able to join dissimilar metals and plastics together with bonds that are far stronger than their conventionally welded counterparts. As far as I was aware though it was limited to inorganic materials, mostly because other materials would simply catch fire rather than fuse together. As it turns out that’s not the case, and recent research has shown that it’s possible to friction weld pieces of wood together in much the same way as you would metal.
What’s particularly interesting about the process is how similar it is to friction welding of metals or plastics. Essentially the rubbing of the two surfaces together causes the interfaces to form a viscous film that mixes together and, when the friction is stopped, fuses. In the above video you can see some of the film escaping through the sides due to the large amount of pressure applied to ensure the weld is secure. Like all other kinds of friction welding, the strength of the joint is dependent on a number of factors such as pressure, the frequency of the friction motion and the duration of the weld. As it turns out friction welding of wood is actually an active area of research, with lots of investigation into how to create the strongest joints.
Even cooler is the fact that some researchers have developed a technique that allows the welds to be done with no fibres being expelled out the sides. This means there’s no charring of the interface medium, enabling the resulting weld to be even stronger and much more resistant to intrusion by moisture. Whilst you’re not going to see a sub built of friction welded wood any time soon it does mean that, potentially, your house could be built without the use of fasteners or joiners, and the first rain that came through wouldn’t make it all come unstuck.
Don’t think you can just go off and rub two pieces of wood together though: the frequency required to fuse the wood was on the order of 150Hz at a pressure of 1.9MPa, far beyond the capabilities of any human to produce. Still it’s not unthinkable that a power tool could be made to do it, although I lack the mechanical engineering experience to figure out how. I’m sure someone will figure it out though, and I can’t wait to see what kind of structures can be made using friction welding techniques.
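To put those figures in perspective, here’s a rough sketch of the clamping force that 1.9MPa implies. The joint area is a hypothetical assumption of mine (a 10cm × 10cm contact face); only the pressure and frequency figures come from the research:

```python
# Clamping force needed to hold 1.9 MPa over a hypothetical 10 cm x 10 cm joint.
PRESSURE_PA = 1.9e6    # 1.9 MPa, from the research
AREA_M2 = 0.10 * 0.10  # assumed joint face: 10 cm x 10 cm (my own assumption)
FREQUENCY_HZ = 150     # oscillation frequency, from the research

force_n = PRESSURE_PA * AREA_M2
equivalent_mass_kg = force_n / 9.81  # mass whose weight would exert that force

print(f"force: {force_n / 1000:.0f} kN "
      f"(~{equivalent_mass_kg:.0f} kg resting on the joint), "
      f"oscillating {FREQUENCY_HZ} times per second")
```

Roughly 19kN, the weight of a small car pressing on a dinner-plate sized joint while it vibrates 150 times a second, which is why this is firmly power tool territory.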
Mars is the most studied planet other than our own, currently playing host to no less than 7 different craft operating in orbit and on its surface. It’s of interest to us due to its similarity to Earth, giving us an insight into how certain processes can affect planets differently. Mars is also the easiest of our sister planets to explore, being relatively close and having an atmosphere that won’t outright destroy craft that dare land on it. Yet for all that research it still manages to surprise us, most recently by revealing that liquid water still flows on its surface. We’re far from done with it however, and the MAVEN craft has just revealed some key insights into Mars’ atmosphere and the history behind its current state.
Mars’ atmosphere is extremely thin, over 100 times less dense than the atmosphere here on Earth. To put that in perspective, that’s about the same density as Earth’s air at an altitude of about 30KM, around 3 times as high as your typical jet airliner flies. It’s also almost all carbon dioxide, with a small smattering of nitrogen and other trace elements. However it wasn’t always this way, as numerous studies have revealed that Mars must have held a much thicker atmosphere in the past. What has remained something of a mystery is just how Mars came to lose its atmosphere and whether those same processes are still in effect today. MAVEN, a craft specifically designed to figure this out, has made some key discoveries and it seems that the long held belief that the sun is to blame is true.
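As a rough sanity check on that altitude comparison, the isothermal barometric formula (density falling away as exp(−h/H)) gives the same order of magnitude. The scale height used here is an assumed round figure of mine, not something from the MAVEN results:

```python
import math

# Isothermal barometric approximation: density(h) ≈ density(0) * exp(-h / H).
# H is an assumed round figure; the real atmosphere isn't isothermal, so this
# is only an order-of-magnitude sketch.
SCALE_HEIGHT_KM = 7.0  # rough pressure scale height for Earth's lower atmosphere
ALTITUDE_KM = 30.0     # the altitude quoted above

ratio = math.exp(-ALTITUDE_KM / SCALE_HEIGHT_KM)
print(f"density at {ALTITUDE_KM:.0f} km ≈ {ratio:.3f} of sea level "
      f"(~1/{1 / ratio:.0f})")
```

With these numbers the air at 30KM comes out at roughly 1/70th of sea level density, the same ballpark as Mars’ surface atmosphere.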
For a planet to lose its atmosphere there are really only two places it can go. In some cases the planet itself can absorb the atmosphere, driving chemical reactions that pull the gases down into more solid forms. This scenario was investigated on Mars, however the lack of the kinds of minerals we’d expect to see (mostly carbonates, given Mars’ mostly carbon dioxide atmosphere) means it was unlikely to be the case. The second way is for the planet to lose its atmosphere to the vacuum of space, which can happen in a number of ways, usually through the planet simply being unable to hold onto it. This latter theory has proved to be correct, although the story is far more interesting than Mars simply being too small.
In the past Mars would have looked a lot like Earth, a small blue marble wrapped in protective gases. Back then the core of Mars was still active, generating a magnetic field much like Earth’s. However, after a time, the core began to cool and the engine behind the giant magnetic field began to fade. As this field weakened the solar wind began to erode the atmosphere, slowly stripping it away. Today Mars’ magnetic field is around 40 times weaker than Earth’s, nowhere near enough to stop this process, which continues to this day. For Mars it seems that its diminutive core was what sealed its fate, unable to sustain the protective magnetic shield against the relentless torment of our sun.
Whilst this has been the prevailing theory for some time, it’s good to get hard data to confirm it. Our two closest planetary neighbours, Venus and Mars, provide insights into how planets can develop and what changes produce what outcomes. Knowing things like this helps us to understand our own Earth and what impacts our behaviour might have on it. Mars might not ever see its atmosphere again, but at least we now know what it might once have looked like, and where it has gone.
Long time readers will know that I’ve long held the belief that OSX and iOS were bound to merge at some point in the future. My reasons for thinking this are wide and varied, but it’s most easily seen in the ever-vanishing delineation between the two hardware lines that support them. The iPad Pro was the last volley that iOS launched against its OSX brethren and, for me, was concrete proof that Apple was looking to merge the two product lines once and for all. Some recent off-hand remarks convinced many others of my line of thinking, so much so that CEO Tim Cook has come out saying that Apple won’t be developing a converged Mac/iPad device.
That statement probably shouldn’t come as much of a surprise given that Cook called the Surface Book “deluded” just under a week ago. Whilst I can understand that it’s every CEO’s right to have a dig at the competition, the commentary from Cook does seem a little naive in this regard. The Surface has shown that there’s a market for a tablet-first laptop hybrid, and there’s every reason to expect a laptop-first tablet hybrid will meet similar success. Indeed the initial reactions to the Surface Book are overwhelmingly positive, so Cook might want to reconsider the rhetoric he’s using on this, especially if Apple ever starts eyeing off creating a competing device like they did with the iPad Pro.
The response about non-convergence though is an interesting one. Indeed, as Windows 8 showed, spanning a platform across all types of devices can lead to a whole raft of compromises that leaves nobody happy. However Microsoft has shown with Windows 10 that it can be done right, and the Surface Book is their chief demonstrator of how a converged system can work. By ruling out the idea that the platforms will ever meet in the middle, apart from the handful of integration services that already work across both, Cook limits the potential synergy that could be gained from such integration.
At the same time I get the feeling that the response might have been born out of the concern he stirred up with his previous comment about not needing a PC any more. He later clarified that as not needing a PC that’s not a Mac, since Macs are apparently not Personal Computers. For fans of the Mac platform this felt like a clear signal that Apple sees PCs as an also-ran, something they keep going to endear brand loyalty more than anything else. When you look at the size of the entire Mac business compared to the rest of Apple it certainly looks that way, with it making up less than 10% of the company’s earnings. For those who use OSX as their platform for creation, the consternation about it going away is a real concern.
As you can probably tell, I don’t entirely believe Tim Cook’s comments on this matter. Whilst no company would want to take an axe to a solid revenue stream like the Mac platform, the constant blurring of the lines between the OSX and iOS based product lines makes their eventual merger seem inevitable. It might not come as a big bang with the two wed in an unholy codebase marriage, but over time I feel the lines differentiating the two product lines will become so blurred as to be meaningless. Indeed if the success of Microsoft’s Surface line is anything to go by, Apple may have their hand forced in this regard, something few would have ever expected to happen to a market leader like Apple.