One of my favourite shows, one I found out about far too late in my adult life, was How It’s Made. The premise of the show is simple: they take you into the manufacturing process behind many common products, showing how they go from raw materials into the products we all know. Whilst I’d probably recommend skipping the episodes that show you how some of your favourite food is made (I think that’s called the Sausage Principle), the insight into how some things are made can be incredibly fascinating. However, whilst everyday products can be interesting, they pale in comparison to something like the following video, which shows how solid aluminium wheels are created for an upcoming jet car:
I think what gets me most about this video is the amazing level of precision they’re able to obtain using massive tools, two things which don’t usually go together. The press seems to be able to move in very small increments and can do so at speeds that just seem to be out of this world. The gripper also seems to have a pretty high level of fidelity about it, being able to pick up an extremely malleable piece of heated aluminium without structurally deforming it. That’s only half the equation though, as the operators of these machines are obviously highly skilled in their operation, being able to guide them with incredible accuracy.
In fact the whole YouTube channel dedicated to the Bloodhound SSC car is filled with engineering marvels like this from showing off the construction of the monocoque and the attached components all the way to the interior and the software they’ll be using for it. If the above video had you tingling with excitement (well, I was, but I’m strange) then I highly recommend checking them out.
In my recent review of Ubisoft Montreal’s latest game, Watch_Dogs, I gave the developers the benefit of the doubt when it came to the graphics issues that many people had raised. Demos are often scripted and sculpted in such a way as to show a game in the best light possible and so the delivered product most often doesn’t line up with people’s expectations. So since Watch_Dogs wasn’t an unplayable monstrosity I chalked it up to the hype leading us all astray and Ubisoft pulling the typical demo shenanigans. As it turns out though there’s a way to make Watch_Dogs look as good as it did in the demos and all that’s required is adding 2 files to a directory.
This mod came to everyone’s attention yesterday with dozens of screenshots plastering all the major games news outlets. A modder called TheWorse on Guru3D became obsessed with diving into the Watch_Dogs code and eventually managed to unpack many of the game’s core files. After that he managed to enable many of the effects that had been present in the original E3 demo of Watch_Dogs, along with tweaking a number of other settings to great effect. The result speaks for itself (as my before and after screenshots above can attest) with the game looking quite a lot better than it did on my first play through. The thing with this mod is that unlike other graphical enhancements like ENB, which gives us all those pretty Skyrim screenshots, this mod isn’t adding anything to the rendering pipeline, it’s just enabling functionality that’s already there. Indeed this is most strongly indicated by the mod’s size: a paltry 45KB.
So first things first: I was wrong. Whilst the demo at E3 was likely running on a machine far better than many PC gamers have access to, this mod shows that Watch_Dogs is capable of looking a lot better than it currently does. My current PC is approaching some 3 years old now, almost ancient in gaming PC years, and it was able to run the mod with ultra graphics settings, something I wasn’t able to do previously. It could probably use a little tweaking to get the framerate a bit higher but honestly that’s just my preference for higher frame rates more than anything. So with this in mind the question then turns to why Watch_Dogs shipped on PC in the state it did and who was ultimately responsible for removing the features that had so many in love with the E3 demo.
The conspiracy theorist in me wants to join the chorus of people saying that Watch_Dogs was intentionally crippled on PC in order to make it look more comparable to its console brethren. Whilst I can’t deny that it’s a possibility I simply have no evidence apart from the features being in the game files themselves. This is where Ubisoft’s response to the controversy would shed some light on the issue as whilst they’re not likely to say “Yep, we did it because Watch_Dogs looks horrendous on consoles when compared to PC” they might at least give us some insight into why these particular features were disabled. Unfortunately they’re still keeping their lips sealed on this one, so all we have to go on now is rampant speculation, something I’m not entirely comfortable engaging in.
Regardless of the reasons though it does feel a bit disingenuous to be shown one product and then be sold another. Most of the traditional reasons for disabled features, like performance or stability issues, just don’t seem to be present with this mod, which lends credence to the idea that they were disabled on purpose after they were fully developed. Until Ubisoft starts talking about this though we don’t have much more to go on, and since the features can be enabled so easily I don’t think many gamers are going to care too much what they have to say anyway. Still I’d very much like to know the story behind it as it looks a lot more like a political/financial issue than a purely technical one.
Whilst computing has evolved exponentially in terms of capabilities and raw computing performance the underlying architecture that drives it has remained largely the same for the past 30 years. The vast majority of platforms are either x86 or some other CISC variant running on a silicon wafer that’s been lithographed to have the millions (and sometimes billions) of transistors etched into it. This is then all connected up to various other components and storage through the various bus definitions, most of which have changed dramatically in the face of new requirements. There’s nothing particularly wrong with this model, it’s served us well and has fallen within the bounds of Moore’s Law for quite some time, however there’s always the nagging question of whether or not there’s another way to do things, perhaps one that will be much better than anything we’ve done before.
According to HP, their new concept, The Machine, is the answer to that question.
For those who haven’t yet read about it (or watched the introductory video on the technology) HP’s The Machine is set to be the next step in computing, taking the most recent advances in computer technology and using them to completely rethink what constitutes a computer. In short there are 3 main components that make it up, 2 of which are based on technology that has yet to see a commercial application. The first appears to be a Sony Cell-like approach to computing cores, essentially combining numerous smaller cores into one big computing pool which can then be activated at will, technology which currently powers their Moonshot range of servers. The second piece is optical interconnects, something which has long been discussed as the next stage in computing but as of yet hasn’t really made inroads at the level HP is talking about. Finally there’s the idea of “universal memory”, essentially the memristor storage which HP Labs has been teasing for some time but has failed to bring to market as a product.
As an idea The Machine is pretty incredible, taking the best-of-breed technology for every subsystem of the traditional computer and putting it all together in the one place. HP is taking the right approach with it too as whilst The Machine might share some common ancestry with regular computers (I’m sure the “special purpose cores” are likely to be x86) current operating systems make a whole bunch of assumptions that won’t be compatible with its architecture. Thankfully they’ll be open sourcing Machine OS which means that it won’t be long before other vendors will be able to support it. It would be all too easy for them to create another HP-UX, a great piece of software in its own right that no one wants to touch because it’s just too damn niche to bother with. That being said however the journey between this concept and reality is a long one, fraught with the very real possibility of it never happening.
You see whilst all of these technologies that make up The Machine might be real in one sense or another 2 of them have yet to see a commercial release. The memristor based storage was “a couple years away” after the original announcement by HP however here we are, some 6 years later, and not even a prototype device has managed to rear its head. Indeed HP said last year that we might see memristor drives in 2018 if we’re lucky and the roadmap shown in the concept video shows the first DIMMs appearing sometime in 2016. Similar things can be said for optical interconnects as whilst they’ve existed at the large scale for some time (fibre interconnects for storage are fairly common) they have yet to be created for the low level type of interconnects that The Machine would require. HP’s roadmap for getting this technology to market is much less clear, something the company will need to get right if they don’t want the whole concept to come apart at the seams.
Honestly my scepticism comes from a history of being disappointed by concepts like this with many things promising the world in terms of computing and almost always failing to deliver on them. Even some of the technology contained within The Machine has already managed to disappoint me with memristor storage remaining vaporware despite numerous publications saying it was mere years away from commercial release. This is one of those times that I’d love to be proven wrong though as nothing would make me happier than to see a true revolution in the way we do computing, one that would hopefully enable us to do so much more. Until I see real pieces of hardware from HP however I’ll remain sceptical, lest I get my feelings hurt once again.
Back in the days when ICQ was the default messaging platform for us teenagers I can remember becoming rather familiar with all manner of chatbots that’d grace my presence. Most of the time they were programmed to get you to go to a website, sometimes legitimate although almost always some kind of scam, but every so often you’d get one that just seemed to be an experiment to see how real they could make one. It wouldn’t take long to figure out if there was a real person on the other end though as their variety of responses was limited and they would often answer questions with more questions, a telltale sign of an expert system. Since those heydays my contact with chatbots has been limited mostly to those examples that have done well in Turing Test competitions around the world. Even those however have proved to be less than stellar, showing that this field still has a long way to go.
However news has been making the rounds that a plucky little chatbot named Eugene Goostman has passed the Turing Test for the first time. Now the definition of the Turing Test itself is somewhat nebulous, being only that a human judge isn’t able to tell the difference between computer-generated responses and those of a human, and if you take that literally it’s already been passed several times over. Many, including myself, take it to mean a little more: a chatbot would have to be able to fool the majority of people into thinking it was human before it could be accepted as having passed the test. In that regard Eugene here hasn’t really passed the test at all, although I do admit its creator’s strategy was a good attempt at poking holes in the test’s vague definition.
You see Eugene isn’t your typical generic chatbot; instead he’s been programmed to present as a 13 year old Ukrainian boy for whom English is a second language. It’s clever because you can then limit the problem space of what he can answer significantly, as you wouldn’t expect a 13 year old to know a lot of things, and when you’re questioning a non-native speaker the verbal tools available to you are again limited. At the same time however this is simply an artificial way of making the chatbot seem more human than it actually is. Indeed this is probably the biggest criticism that has been levelled at Eugene since its rise to fame, as it appeared to dodge more questions than it could answer, a telltale sign that you’re speaking to an AI.
So as you can probably tell by the tone of my writing I don’t think Eugene qualifies as having passed the Turing Test, as the criterion used (33% of the judges were fooled) wasn’t sufficient; otherwise several other bots would have claimed that title previously. I wholly admit this is due in part to the nebulous nature of how Turing first posited the test, whereby the interpretation of “passed” varies wildly between individuals, but my sentiment does seem to echo through the wider AI community. I think the ideas behind the Eugene chatbot are interesting as they show how the problem space can be narrowed down, but if the chance of it fooling someone is less than random then, in my mind, that does not qualify as a pass.
I don’t expect that the Turing Test will be passed to a majority of the AI community’s satisfaction for some time to come as it requires duplicating so many human functions that we just haven’t been able to translate into code yet. For me the easiest way to tell a bot from a human is to teach it something esoteric and have it repeat its own interpretation back to me, something which no chatbot has been able to do to date. Indeed just that simple example, being able to teach it something and have it interpret it based on its own knowledge base, entails leaps forward in AI that just don’t exist in a general form yet. I’m not saying that it will never happen, far from it, but the system that first truly passes the Turing Test is yet to come and is likely many years away from reality.
I remember attending an exhibition about Leonardo Da Vinci a couple years ago and I was astounded by the complexity of some of the machines he created. It wasn’t just that he’d figured out these things where no one else had, more it was some of the things that he designed didn’t seem possible to me, at least with the technology he had available to him at the time. Ever since then I’ve had something of a fascination with mechanical structures, marvelling at creations that seem like they should be impossible. My favourite example of this is Theo Jansen’s Strandbeests, a new form of life that he has been striving to create for the better part of 25 years.
All of his designs are essentially tensegrity structures (i.e. all parts of the structure are under constant tension) arranged in such a way that when an outside force, in this case the wind at a beach, acts on them they’re able to walk. His initial designs only functioned when the wind was blowing, however later designs, many of which you can see in the video, are able to store wind energy and then use it later through some rather clever mechanical engineering. Unfortunately I couldn’t find the best video, which has Theo explaining how they work, as that one also shows another Strandbeest he created that avoids walking itself into the ocean (something I’m still not sure I completely understand).
The idea of creating a new form of life, even if it doesn’t meet the 7 rules for biological life, is a pretty exciting one, and it’s found an unlikely form of replication: 3D printing. After many people made their own versions of his Strandbeests (I even printed a simple one off, although it broke multiple times during assembly) Theo made the designs available through Shapeways, essentially giving the Strandbeests a way to procreate. Sure it’s not as elegant as what us biological entities have but the idea does have a cool sci-fi bent to it that tickles me in all the right places.
Taken to its logical extreme I guess a Reprap that printed Strandbeests that assembled other Repraps would be the ultimate end goal, although that’s both exciting and horrifying at the same time.
Last week I wrote a post about the Solar Roadways Indiegogo campaign that had been sweeping the media. In it I did a lot of back-of-the-envelope math to come up with some figures that made them seem reasonable based on my assumptions, which led me to the conclusion that they looked feasible, with the caveat that I was working with very little information. Still I did a decent amount of research into some of the various components to make sure I was in the same order of magnitude. You’d then think that the venerable Thunderf00t’s takedown video on this project would put me at odds with him but, for the most part, I agree with him, although there were a couple of glaring oversights which I feel require some attention.
First off let me start with the stuff that I agree with. He’s completely correct in the assertion that the tile construction isn’t optimal for road usage and the issues that arise from it are non-trivial. The idea of using LEDs sounds great in principle but as he points out they’re nigh on invisible in broad daylight, which would make the road appear unmarked, a worrying prospect. Transporting the energy generated by these panels will also be quite challenging as the current produced by your typical solar panel isn’t conducive to being put directly on the grid. The properties of the road also require further validation as whilst the demonstrations shown by Solar Roadways say they’re up to standard there’s little proof to back up these claims so far. Finally the idea of melting snow seemed plausible to me on first look but I had not run any numbers against that claim so I’d defer to Thunderf00t’s analysis on this one.
However his claims about the glass are off the mark in many cases. Firstly it’s completely possible to make clear glass from recycled coloured glass, usually through the use of additives like erbium oxide or manganese oxide. I agree with his point that it’s unlikely they have the facilities available to do this right now, however it’s not out of the realm of possibility. Thunderf00t also makes the mistake of taking the single-item price of a piece of tempered glass off eBay and then using that to extrapolate an astronomical cost for covering all of the roads in the USA. In fact tempered glass produced at volume is actually rather cheap, about $7.50 per square meter, when you check out some large scale manufacturers. This makes the cost look far more reasonable than the $20 trillion that was originally quoted.
The same thing can be said for the solar panels, PCBs, LEDs and microcontrollers underneath them. Solar panels can be had for the low, low price of $0.53 per watt (a grand total of about $30 per panel) and RGB LEDs for about $0.08 each (you could have 1,000 in each panel for $80). Indeed the cost of constructing the panels themselves is likely not to be that expensive, especially at volume; rather the preparation of the surface and the conduit channel are likely to be more expensive than your traditional road. This is because you’d likely have to do the same amount of site prep work for both (you can’t just lay these tiles into dirt) and then the panels themselves would be an incidental cost on top.
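To put those bulk prices together, here’s a rough bill-of-materials sketch for a single panel. Every figure is one of the speculative numbers quoted above (bulk prices, an eyeballed panel size, a generous 1,000 LED count), not anything published by Solar Roadways:

```python
# Rough per-panel materials cost at bulk prices. All figures are
# the speculative numbers from this post, not official ones.
glass_per_m2 = 7.50     # tempered glass, bulk (USD per square meter)
solar_per_watt = 0.53   # solar cells, bulk (USD per watt)
led_each = 0.08         # RGB LEDs (USD each)

panel_area_m2 = 0.2     # eyeballed panel size
panel_watts = 52        # claimed final production panel output
leds_per_panel = 1000   # generous estimate

glass_cost = glass_per_m2 * panel_area_m2    # ~$1.50
solar_cost = solar_per_watt * panel_watts    # ~$27.56 (the "$30 per panel")
led_cost = led_each * leds_per_panel         # $80
total = glass_cost + solar_cost + led_cost
print(f"Per-panel materials: ~${total:.2f}")
```

Multiplying that per-panel figure out by the number of panels in a kilometer of road would then give a first-order materials estimate to weigh against traditional road costs, which is exactly why the site prep and conduit work looks like the real unknown.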
Tempered glass is also a lot harder than your regular type of glass, something which Thunderf00t missed in his analysis. It’s true that regular glass has a Mohs hardness of around 5 but tempered glass can be up to 7 or higher, depending on the additives used. Traditional road surfaces have a very similar hardness to that of tempered glass, meaning the tiles would suffer no more wear than a traditional road surface would. Whether this would mean a degradation in optical quality, and therefore solar efficiency, over time is something I can’t really comment on, but the argument of sand and other things wearing away the surface doesn’t really hold up.
All this being said though Thunderf00t hits on the big issues that Solar Roadways has to face in order for their idea to become a reality. Whilst I’m still erring on the side of it being possible I do admit that there are numerous gaps in our knowledge of the product, many of which could quickly lead to it being completely infeasible. Still there’s potential for this idea to work in many areas, like the vast highways throughout Australia, even if some of the more outlandish ideas like melting the snow on them might not work out. It will be interesting to see how Solar Roadways reacts to this as there are numerous questions which can’t go unanswered.
With an abundance of space and not much else the rural parts of Australia aren’t really the place where a kid has much to entertain themselves with. From the age of about 12 however my parents let us kids bash our way around the property in all manner of vehicles which has then fed into a lifelong obsession with cars. This has been in direct competition with my financially sensible side however as cars are a depreciating asset, one that no amount of money invested in them can ever recoup. However I still enjoy the act of driving itself, especially if it’s through some of Australia’s more picturesque landscapes. You’d think then that the idea of a self driving car would be abhorrent to a person like myself but in reality it’s anything but.
We’re fast approaching the time when cars that can drive themselves to and from any location are not only technically feasible, they’re a few short steps away from being a commercial reality. Google’s self driving car, whilst it has only left its home town a couple of times, has demonstrated that it’s quite possible to arm a car with a bevy of sensors and have it react better than a human would in many situations. Indeed the accidents their car has been involved in have not been the fault of the software, but of the humans either controlling the self driving car or those ramming into the back of it. Whilst there are still many regulatory hurdles to go before these things are seen en masse on our roads it would seem like having them there would be a huge boon to everyone, especially those travelling as passengers.
For me whilst driving isn’t an unpleasant experience it’s still time where I’m unable to do anything but drive the car. Now I’m not exactly your stereotypical workaholic (I keep standard hours and attempt to automate most of my work instead) but having an extra hour or so a day where I can complete a few tasks, or even just catch up on interesting articles, would be pretty handy. Indeed this is the reason why I still fly most places when travelling for business, even when the flight from Canberra to the other capitals is under an hour total. Since it’s not me doing the driving I can get things done rather than spending multiple hours watching the odometer.
There’s also those numerous times when neither the wife nor I feel like driving and we could simply hand over to the car for the trip. I can even imagine it reducing our need to have separate cars as I could simply have the car drop my wife off and return to me if I needed it. That’s a pretty huge benefit and one that’s well worth paying a bit of a premium for.
This would also have the unintentional benefit of making those times when I want to drive that much more enjoyable. Nothing takes the fun out of something you enjoy like being forced to do it all the time for another purpose, something which driving to work every day certainly did for me. If I was only driving when I wanted to I feel I’d enjoy it far more than I otherwise would. I think a lot of car enthusiasts feel the same way as few drive their pride and joy to work every day, instead having a daily driver that they run on the cheap. Of course some will abhor the experience in its entirety but you get that with any kind of new technology.
For me this technology cannot come quickly enough as the benefits are huge, with the only downside being the likely high cost of acquisition. I’ve only been speaking from a personal viewpoint here too as there’s far more to be gained once self driving cars reach a decent level of penetration among the wider community.
That’s a blog post for another day, however.
The main substrate of our roads hasn’t changed much in the past 50 years. Most of our roads these days are asphalt concrete with some being plain old concrete with a coarse aggregate in them. For what we use them for this isn’t really an issue as the most modern cars can still perform just as well on all kinds of roads so the impetus to improve them is low. There have been numerous ideas put forth to take advantage of the huge swaths of road we’ve laid down over the years, many seeking to use the heat they absorb to do something useful. One idea though would be a radical departure from the way we currently construct roads and it could prove to be a great source of renewable energy.
Solar (Freakin’) Roadways are solar tiles that can be laid down in place of regular road. Their surface is tempered glass that’s durable enough for a tractor to trundle over it and provides the same amount of grip that a traditional asphalt surface does. Underneath that surface is a bunch of solar panels that will generate electricity during the day. The hexagonal panels also include an array of LEDs which can then be used to generate lane markers, traffic signs or even alert drivers to hazards that have been detected up the road. Both the concept art and the current prototypes they have developed look extremely cool and with their Indiegogo campaign already being fully funded it’s almost a sure bet that we’ll see roads paved with these in the future.
The first question that comes to everyone’s mind though is just how much will roads paved in this way cost, and how does that compare to traditional roads?
As it turns out finding solid numbers on the cost of road construction per kilometer is a little difficult as the numbers seem to differ wildly depending on who you ask. A study that took data from several countries states that the median cost is somewhere around $960,000/km (I assume that’s USD) whereas councils from Australia have prices ranging from $600,000/km to $1,159,000/km. Indeed depending on how complicated the road is the costs can escalate quickly, with Melbourne’s EastLink road costing somewhere on the order of $34,000,000 per kilometer laid down. In terms of feasibility for Solar Roadways I’d say they could be competitive with traditional roads if they could get their costs to around $1,000,000/km at scale production, something which, in my mind, seems achievable.
Unfortunately Solar Roadways isn’t forthcoming with costs as of yet mostly due to them being in the prototype stage. Taking a look over the various components they list though I believe the majority of the construction cost will come from the channels beneath the panels as bulk prices for things like solar panels, tempered glass and PCBs are quite low. Digging and concreting the channels required to carry the power infrastructure could easily end up costing as much as a traditional road does so potentially we’re looking at a slightly higher cost per km than our current roads. Of course I could be horribly wrong about this since I’m no civil engineer.
The cost would be somewhat offset by the power the solar roads would generate, although the payback period is likely to be quite long. Their current prototypes are 36 watt panels which they claim will go up to 52 watts for the final production module. I can’t find any measurements for their panels so I’ve eyeballed that they’re roughly 30cm per side, giving them an area of about 0.2 square meters. This means that a square meter of these things could generate roughly 250 watts at peak efficiency. The output will vary considerably throughout the year but if you get, say, 7 hours per day at 50% of max output you’re looking at about 875 watt-hours generated per square meter per day. Your average road is about 3 meters wide, giving 1km of road some 3,000 square meters of generation area and about 2,600kWh per day. The current feed-in tariffs in Australia would have 1km of Solar Roadways road making about $1,000 per day, giving a payback time of around 3 years. My numbers are likely skewed larger than they’d be realistically (there are many more factors that come into play) but even slashing the efficiency down to 10% still gives you a payback time of around 15 years, longer than the current expected life of the panels.
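For anyone who wants to check the arithmetic, that back-of-the-envelope calculation can be laid out explicitly. Every input below is one of my assumptions (claimed panel output, eyeballed panel area, sun hours, an assumed feed-in tariff and the $1M/km construction cost from above), so treat the result as an order-of-magnitude figure at best:

```python
# Back-of-the-envelope payback estimate for 1km of Solar Roadways.
# All inputs are rough assumptions from this post, not measured data.
panel_watts = 52           # claimed production panel output (W)
panel_area_m2 = 0.2        # eyeballed panel size (m^2)
road_width_m = 3           # average road width
road_length_m = 1000       # 1km of road
sun_hours = 7              # usable sun hours per day
avg_output_fraction = 0.5  # average output vs peak over those hours
tariff_per_kwh = 0.38      # assumed feed-in tariff (AUD/kWh)
cost_per_km = 1_000_000    # assumed construction cost (AUD/km)

watts_per_m2 = panel_watts / panel_area_m2  # ~260 W/m^2 peak (the ~250 above)
area_m2 = road_width_m * road_length_m      # 3,000 m^2 of generation area
kwh_per_day = watts_per_m2 * area_m2 * sun_hours * avg_output_fraction / 1000
revenue_per_day = kwh_per_day * tariff_per_kwh
payback_years = cost_per_km / (revenue_per_day * 365)

print(f"{kwh_per_day:,.0f} kWh/day, ${revenue_per_day:,.0f}/day, "
      f"payback ~ {payback_years:.1f} years")
```

Dropping `avg_output_fraction` to 0.1 in the same sketch pushes the payback out to roughly 13 years, which is where the pessimistic 15-year figure comes from.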
As an armchair observer then it does seem like Solar Roadways’ idea is feasible and could end up being a net revenue generator for those who choose to adopt it. All of my numbers are based on speculation though, so there are numerous things that could put the kibosh on it, but it’s at least worth taking to the real world implementation stage to see how things pan out. Indeed should this work as advertised then the future of transportation could be radically different, maybe enough to curb our impact on the global ecosystem. I’m looking forward to seeing more from Solar Roadways as a future with them looks to be incredibly exciting.
The Surface has always been something of a bastard child for Microsoft. They were somewhat forced into creating a tablet device as everyone saw them losing to Apple in this space (even though Microsoft’s consumer electronics division isn’t one of their main profit centers) and their entry into the market managed to confuse a lot of people. The split between the Pro and RT lines was clear enough for those of us in the know, however consumers, who in the face of 2 seemingly identical choices will often prefer the cheaper one, were left with devices that didn’t function exactly as they expected. The branding of the Surface then changed slightly so that those seeking the device would likely end up with the Pro model and all would be right with the world. The Surface Pro 3, announced last week, carries on that tradition, albeit with a much more extreme approach.
As you’d expect the new Surface is an evolutionary step up in terms of functionality, specifications and, funnily enough, size. You now have the choice of an Intel i3, i5 or i7, 4GB or 8GB of memory and up to 512GB of SSD storage. The screen has swelled to 12″ in size and now sports a pretty incredible 2160 x 1440 resolution, equal to that of many high end screens you’d typically find on a desktop. These additional features actually come with a reduction in weight from the Surface Pro 2, down from 900g to a paltry 790g. There are some other minor changes as well, like the multi-position kickstand and a changed pen, but those are small potatoes compared to the rest of the changes, which seem to have aimed the Surface more at being a laptop replacement than a tablet that can do laptop things.
Since I carry a laptop with me for work (a Dell Latitude E6430 if you were wondering) I’m most certainly sensitive to the issues that plague people like me and the Surface Pro has the answer to many of them. Having to lug my work beast around isn’t the most pleasant experience and I’ve long been a champion of moving everyone across to Ultrabooks in order to address many of the concerns. The Surface Pro is essentially an Ultrabook in a tablet form factor which provides the benefits of both in one package. Indeed colleagues of mine who’ve bought a Surface for that purpose love them and those who bought the original Surface Pro back at the TechEd fire sale all said similar things after a couple of days of use.
The one thing that would seal the deal for me on the Surface as the replacement for my now 2 year old Zenbook would be the inclusion of (or at least the option to include) a discrete graphics card. Whilst I don’t do it often I do use my (non-work) laptop for gaming and whilst the Intel HD 4400 can play some games decently it will struggle with the majority of them. However the inclusion of even a basic discrete chip would make the Surface a portable gaming powerhouse and the prime choice for when my Zenbook reaches retirement. That’s still a year or two away however so Microsoft may end up getting my money in the end.
What’s really interesting about this announcement is the profound lack of an RT version of the Surface Pro 3. Whilst I didn’t think there was anything to get confused about between the two versions, it seems a lot of people did, and that has led to a lot of disappointed customers. It was obvious that Microsoft was downplaying the RT version when the second one was announced last year, but few thought that it would lead to Microsoft outright cancelling the line. Indeed the lack of an accompanying Surface RT would indicate that Microsoft isn’t so keen on that platform, something which doesn’t bode well for the few OEMs that decided to play in that space. On the flip side it could be a great in for them, as Microsoft eating up the low end of the market was always going to be a sore spot for their OEMs, and Microsoft still seems committed to the idea from a purely technological point of view.
The Surface Pro 3 might not be seeing me pull out the wallet just yet, but there’s enough to like about it that I can see many IT departments turning towards it as the platform of choice for their mobile environments. The lack of an RT variant could be construed as Microsoft giving up on the RT idea, but I think it’s probably more to do with the confusion around each platform’s value proposition. Regardless it seems that Microsoft is committed to the Surface Pro platform, something which was heavily in doubt just under a year ago. It might not be the commercial success that the iPad et al were, but it seems the Surface Pro will become a decent revenue generator for Microsoft.
Microsoft’s message last year was pretty clear: we’re betting big that you’ll be using Azure as part of your environment and we’ve got a bunch of tools to make that happen. For someone who has cloudy aspirations this was incredibly exciting, even though I was pretty sure that my main client, the Australian government, would likely abstain from using any of them for a long time. This year’s TechEd seemed a little more subdued than last year (the lack of a Bond-style entrance with its accompanying Aston Martin was the first indicator of that), with the heavy focus on cloud remaining, albeit with a bent towards the mobile world.
Probably the biggest new feature to come to Azure is ExpressRoute, a service which allows you to connect directly to the Azure cloud without having to go over the Internet. For companies that have regulations around their data and the networks it can traverse this gives them the opportunity to use cloud services whilst still maintaining their obligations. For someone like me who primarily works with government this is a godsend and once the Azure instance comes online in Australia I’ll finally be able to sell it as a viable solution for many of their services. It will still take them some time to warm to the idea but with a heavy focus on finding savings, something Azure can definitely provide, I’m sure the adoption rate will be a lot faster than it has been with previous innovations of this nature.
The benefits of Azure Files, on the other hand, are less clear as, whilst I can understand the marketing proposition, it’s not that hard to set up a file server within Azure. This is made somewhat more pertinent by the fact that it uses SMB 2.1 rather than Server 2012’s SMB 3.0, so whilst you get some good features in the form of a REST API and all the backing behind Azure’s other forms of storage, it lacks many of the newer base capabilities that a traditional file server has. Still, Microsoft isn’t one to develop a feature unless they know there’s a market for it, so I’d have to guess that this is a feature many customers have been begging for.
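For the curious, mounting an Azure Files share from a Windows box is about as simple as mapping any other network drive, which is really the whole pitch. A minimal sketch (the storage account name, share name and key below are placeholders, not real values):

```shell
rem Map an Azure Files share to Z: over SMB 2.1 from a Windows client.
rem "mystorageacct" and "myshare" are hypothetical names; substitute your
rem own storage account, share name and storage account access key.
net use Z: \\mystorageacct.file.core.windows.net\myshare /u:mystorageacct <storage-account-key>
```

Because it’s SMB 2.1 the client needs to be Windows 7 / Server 2008 R2 or later, and the share is only reachable from within the same Azure region unless you go via the REST API.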
In a similar vein the improvements to Microsoft’s BYOD offerings appear to be incremental more than anything, with InTune receiving some updates and the introduction of Azure RemoteApp. Of the two Azure RemoteApp is the more interesting as it allows you to deliver applications from the Azure cloud to your end points, wherever they may be. For large, disparate organisations this will be great as you can leverage Azure to deploy to any of your offices, negating the need for heavy infrastructure in order to provide a good user experience. There’s also the opportunity for Microsoft to offer pre-packaged applications (which they’re currently doing with Office 2013), although that’s somewhat at odds with their latest push for Office 365.
Notably absent from any of the announcements was Windows 8.2 or Server 2012 R3, something which I think many of us had expected to hear rumblings about. There’s still the chance it will get announced at TechEd Australia this year, especially considering the leaked builds that have been doing the rounds. If it doesn’t, it’d be a slight departure from the tempo they set last year, something which I’m not entirely sure is a good or bad move from them.
Overall this feels like incremental improvement on the strategy Microsoft was championing last year rather than revolutionary change. That’s not a bad thing really, as the enterprise market is still catching up with Microsoft’s newfound rapid pace and likely won’t be on par with them for a few years yet. Still it raises the question of whether or not Microsoft is really committed to the rapid refresh program they kicked off not too long ago. TechEd Australia has played host to some big launches in the past, so seeing Windows 8.2 for the first time there isn’t out of the question. As for us IT folk the message seems to remain the same: get on the cloud soon, and make sure it’s Azure.