Establishing lunar colonies seems like the next logical step; the Moon is our closest celestial body, after all. It might surprise you, then, to learn that doing so could be considerably harder than establishing a similarly sized colony on Venus or Mars. With no atmosphere to speak of, the Moon's surface is an incredibly harsh place, bearing the full brunt of the Sun's radiation. That's only half the problem, too: the lunar day and night each last about two weeks, so you'll spend half your time in darkness at temperatures plunging below -150°C. There are ways around this, however, and recent research has led to some rather interesting prospects.
Whilst the surface of the Moon might be unforgiving, going just a little way below it negates many of the more undesirable aspects. Drilling into the surface is one option, but it's incredibly resource intensive, especially when you consider that all the gear required to do said drilling would need to be sent from Earth. The alternative is to use structures that are already present on the Moon, such as caverns and other natural formations. We know these kinds of formations exist thanks to the high resolution imagery and gravity mapping we've done (the Moon's gravity field is surprisingly non-uniform), but just how big they could be has remained somewhat of a mystery.
Researchers at Purdue University decided to investigate just how big structures like these could be, specifically looking at how big lava tubes could get if they existed on the Moon. During its formation, which would have happened when a large object collided with the then primordial Earth, the surface of the Moon would have been ablaze with volcanic activity. Due to the Moon's much smaller size that activity has long since ceased, but it would still have left behind the telltale structures of its more tumultuous history. The researchers modelled how big these tubes could have grown given the conditions present on the Moon and came to a rather intriguing conclusion: they'd be huge.
When you see the outcome of the research it feels like an obvious conclusion (of course they'd be bigger, there's less gravity), but the fact that they're an order of magnitude bigger than what we'd see on Earth is pretty astounding. The picture above gives you some sense of scale for these potential structures, each able to fit several entire cities within it with an incredible amount of room to spare. Whilst using such structures as the basis for a future lunar colony presents a whole host of challenges, it does open up the possibility of the Moon having much more usable space than we first thought.
After spending a week deep in the bowels of Microsoft's premier tech conference and writing about it breathlessly for Lifehacker Australia you'd be forgiven for thinking I'm something of a Microsoft shill. It's true that I think the direction they're going in with their infrastructure products is pretty spectacular, and my excitement for those developments is genuine. However if you've been here for a while you'll know that I'm also among their harshest critics, especially when they do something that's drastically out of line with my expectations as one of their consumers. I believe in giving credit where it's due, though, and a recent PA Report article has called Microsoft's credentials in one area into question when they honestly shouldn't be.
The article I’m referring to is this one:
I’m worried that there are going to be a few million consoles trying to dial into the home servers on Christmas morning, about the time when a mass of people begin to download new games through Microsoft’s servers. Remember, every game will be available digitally day and date of the retail version, so you’re going to see a spike in the number of people who buy their Xbox One games online.
I’m worried about what happens when that new Halo or Call of Duty is released and the system is stressed well above normal operating conditions. If their system falls, no matter how good our Internet connections, we won’t be able to play games.
Taken at face value this appears to be a fair comment. We can all remember times when the Xbox Live service came down in a screaming heap, usually around Christmas time or when a large release happened. Indeed a quick Google search reveals there have been a couple of outages in recent memory, although digging deeper into them reveals that they were usually part of routine maintenance and only affected small groups of people at a time. With all the other criticism being levelled at Microsoft of late (most of which I believe is completely valid) it's not unreasonable to question their ability to keep a service of this scale running.
However as the title of this post alludes to I don’t think that’s going to be an issue.
The picture shown above is from the Windows Azure Internals session by Mark Russinovich, which I attended last week at TechEd North America. It details the current infrastructure that underpins the Windows Azure platform, which powers all of Microsoft's sites including the Xbox Live service. If you have a look at the rest of the slides from the presentation you'll see how far that architecture has come since it was first introduced 5 years ago, when the over-subscription rates were much, much higher across the entire Azure stack. What this meant was that when something big happened the network simply couldn't handle it and caved under the pressure. The current generation of Azure infrastructure, however, is far less oversubscribed and has several orders of magnitude more servers behind it. With that in mind it's far less likely that Microsoft will struggle to service large spikes like they have in the past, as the capacity they have on tap is just phenomenal.
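To make the oversubscription point concrete, here's a toy calculation. The figures are entirely made up for illustration; Microsoft doesn't publish Azure's actual ratios, so treat this purely as a sketch of the concept:

```python
def oversubscription_squeeze(link_gbps: float, tenants: int,
                             ratio: float) -> tuple[float, float]:
    """Return (advertised, worst_case) per-tenant bandwidth in Gbps.

    A ratio of N:1 means N times more capacity has been sold than the
    physical link can actually carry at once. The worst case happens
    when every tenant lights up simultaneously, e.g. on launch day.
    """
    advertised = link_gbps * ratio / tenants
    worst_case = link_gbps / tenants
    return advertised, worst_case

# Hypothetical numbers only: a 10 Gbps uplink shared by 100 tenants.
adv, worst = oversubscription_squeeze(10, 100, 20)   # 20:1, the bad old days
print(f"20:1 -> sold {adv:.1f} Gbps each, worst case {worst:.2f} Gbps")

adv, worst = oversubscription_squeeze(10, 100, 2)    # 2:1, a leaner ratio
print(f" 2:1 -> sold {adv:.1f} Gbps each, worst case {worst:.2f} Gbps")
```

The point isn't the exact numbers: it's that lowering the ratio shrinks the gap between what's promised and what's deliverable when everyone shows up at once, which is exactly the Christmas-morning scenario the quote worries about.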
Of course this doesn’t alleviate the issues with the always/often on DRM or the myriad other things people are criticizing the Xbox One for, but it should show you that worrying about Microsoft’s ability to run a reliable service shouldn’t be one of them. Of course I’m only approaching this from an infrastructure point of view, and it’s entirely possible for the Xbox Live system to have some systemic issue that will cause it to fail no matter how much hardware they throw at it. I’m not too concerned about that, however, as Microsoft isn’t your run-of-the-mill startup that’s just learning how to scale.
I guess we’ll just have to wait and see how right or wrong I am.
Want to feel really insignificant for a bit?
I don’t know what it is, but things like the galaxy IC1101, VY Canis Majoris and other heavenly bodies that are just beyond anything I’m capable of imagining captivate me completely. I think it’s probably the possibilities that arise from such scale. Just think about it: if one planet in one lonely solar system was able to produce a species like us, what kind of life could have formed in these other places? Could it even happen? Would we be able to recognise it if we saw it? The possibilities are nearly endless and that, for me at least, is wildly fascinating.
It’s that desire to find out what’s out there that fuels my passion for transhumanist ideals. Whilst many will argue that ageing and death are a natural part of life that should not be circumvented, I instead ask why you would want to limit your experience to one lifetime, especially when the universe is so vast as to provide nearly limitless opportunities for those who wish to explore it.
Some find that incomprehensible scale intimidating, I find it invigorating.
I don’t know why, but the way brakes work on cars, bikes and other vehicles has always puzzled me. Nearly all braking systems in the world work by converting your vehicle’s kinetic energy (i.e. its movement) into heat through high friction pads clamped against the wheel. This means that, for all practical purposes, the energy that went into creating said movement is unrecoverable, reducing the overall efficiency of the system. I figured there had to be a better way to do it, one that would at least recover some of that lost energy in order to make all forms of transport more efficient.
Such a system became available with the first electric cars in the form of regenerative braking. The system consists of a small generator attached to an axle or wheel hub that is engaged when the brakes are applied. The electricity it generates is fed back into the battery, recharging it and extending the range of the vehicle. These systems are quite large, however, and I always envisioned some sort of system that could be scaled to fit transportation of any size. As it turns out, someone has come up with it:
It’s really quite ingenious in its simplicity: braking spins up a flywheel, which functions as a kind of mechanical battery whose stored energy can then be used on demand. Of course for a retail system you’d probably want to encase the flywheel in something, for both safety and efficiency purposes, as whilst flywheels are usually safe they can be rather destructive should anything interfere with them. Still, such a system could easily be scaled up, down or horizontally (using several in one vehicle) to suit almost any application.
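To get a feel for the energy involved, here's a quick back-of-the-envelope sketch. All the vehicle and flywheel figures below are illustrative assumptions on my part, not specs from the video:

```python
import math

def kinetic_energy(mass_kg: float, speed_ms: float) -> float:
    """Kinetic energy of a moving vehicle: E = 1/2 * m * v^2 (joules)."""
    return 0.5 * mass_kg * speed_ms ** 2

def flywheel_energy(mass_kg: float, radius_m: float, rpm: float) -> float:
    """Energy stored in a spinning solid disc: E = 1/2 * I * w^2 (joules),
    where the moment of inertia I = 1/2 * m * r^2 for a uniform disc."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2 * math.pi / 60  # convert rpm to radians per second
    return 0.5 * inertia * omega ** 2

# A hypothetical 1,500 kg car braking from 60 km/h to rest sheds roughly
# this much energy, normally lost as heat in the brake pads:
car_energy = kinetic_energy(1500, 60 / 3.6)

# A modest 5 kg, 20 cm radius flywheel spun to 20,000 rpm could soak up
# a comparable amount:
fw_energy = flywheel_energy(5, 0.20, 20000)

print(f"Braking energy:  {car_energy / 1000:.0f} kJ")
print(f"Flywheel stores: {fw_energy / 1000:.0f} kJ")
```

What jumps out is that a surprisingly small flywheel, spun fast enough, can hold as much energy as an entire car sheds in a stop from suburban speeds, which is why the idea scales down so well.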
There are some issues of course, ones that became painfully apparent back when several countries experimented with a scaled-up version of this idea in the form of the Gyrobus. Granted, those buses used flywheels as their sole power source, so many of the problems are diminished at smaller scales, but the same basic concerns still apply to the scaled-down versions. Most of these can be overcome though, and it will be interesting to see how the idea develops from here.
This is just another example of an innovation that should be everywhere. The idea is so simple that it makes me wonder what’s stopping companies from pursuing it themselves, as if there’s something I’m not aware of. I’m sure the safety aspect plays a big role, but a properly designed and secured flywheel is no more dangerous than a battery of similar size. Hopefully videos like the one above will inspire companies to look into the idea more closely and start producing vehicles that are far more efficient than the ones they make today.
Ever since I’ve been able to get broadband Internet I’ve only had the one provider: Internode. Initially it was just because my house mate wanted to go with them, and having zero experience in the area I decided to go along with him. I think the choice was partially due to his home town being Adelaide, but Internode also had a reputation for being a great ISP for geeks and gamers like us. Fast forward 6 years and you can still find me on an Internode plan, simply because the value add services they provide are second to none. Whilst others may be cheaper overall, none can hold a candle to all the extra value that Internode provides, which I most heartily indulge in.
Throughout its long history Internode has made a point of being one of the largest privately owned Internet service providers (ISPs) in Australia. This is no small feat, as the amount of capital required to become an ISP, even in Australia, is considerable. Internode’s reputation, however, afforded it the luxury of many geeks like myself chomping at the bit to get their services in our area, guaranteeing them a decent subscriber base wherever there was even a slight concentration of people passionate about IT and related fields. In all honesty I thought Internode would remain privately owned for a long time to come, with the only likely change being a move to public trading if they wanted to pursue more aggressive growth strategies.
Today brings news however that they will be bought out by none other than iiNet:
In a conference call this afternoon discussing the $105 million takeover announcement, Hackett said that because of NBN Co’s connectivity virtual circuit charge, and the decision to have 121 points of interconnect (POI) for the network, only an ISP of around 250,000 customers would have the scale to survive in an NBN world. With 260,000 active services, Internode just makes the cut. He said the merger was a matter of survival.
“The size of Internode on its own is right on the bottom edge of what we’ve considered viable to be an NBN player. If you’re smaller than that, the economics don’t stack up. It would be a dangerous thing for us to enter the next era being only just quite big enough,” he said.
Honestly when I first heard the news I had some very mixed feelings about what it would entail. iiNet, whilst a damn fine provider in their own right, isn’t Internode, and their value add services still lag behind Internode’s. That said, if I was unable to get Internode in a given area iiNet would be the second ISP I’d consider, and numerous friends of mine have gone with them. I figured I’d reserve my judgement until I could do some more research on the issue, and as it turns out I, and all of Internode’s customers, really have nothing to worry about.
Internode as it stands right now will continue on as it does but will be wholly owned by iiNet. This means they can continue to leverage their brand identity (including their slightly premium priced value add business model) whilst gaining the benefit of the large infrastructure that iiNet has to offer. The deal then seems to be quite advantageous for both Internode and iiNet, especially with them both looking towards an NBN future.
That leads on to another interesting point to come out of this announcement: Internode didn’t believe it could economically provide NBN services at its current scale. That’s a little scary when one of the largest independent ISPs (with about 3% market share, if I’m reading this right) doesn’t believe the NBN is a viable business model for them. Whilst they’ll now be able to provide such services thanks to iiNet’s larger user base, it does signal that nearly all smaller ISPs are going to struggle to provide NBN services into the future. I don’t imagine we’ll end up with a price fixing oligopoly, but it does seem to signal the beginning of the end for those who can’t provide an NBN connection.
Overall the acquisition looks like a decisive move for iiNet, and the future is now looking quite bright for Internode and all its customers. Hopefully this will mean the same or better services delivered at a lower price thanks to iiNet’s economies of scale, making Internode’s NBN plans look a lot more competitive than they currently are. Should iiNet want to make any fundamental changes to Internode they’ll have to tread softly, as there are legions of keyboard warriors (myself included) who could unleash hell if they felt they’d been wronged. I doubt it will come to that, but there are definitely going to be a lot of eyes on the new iiNet/Internode from now on.
Maybe I’m just hanging around the wrong places on the Internet, but recently there seems to be a higher than average level of vitriol being launched at Microsoft. From my totally arbitrary standpoint most people no longer view Microsoft as the evil empire they used to, instead focusing on the two new giants of the tech sector, Apple and Google. This could easily be explained by the fact that Microsoft hasn’t really done anything particularly evil recently, whilst Apple and Google have both been dealing with ongoing controversies over platform lock-down and privacy respectively. Still, no fewer than two articles have crossed my path of late that squarely blame Microsoft for various problems, and I feel they warrant a response.
The first comes courtesy of the slowly failing MySpace, which has been bleeding users for almost 2 years straight now. Whilst there are numerous reasons why they’re failing (with Facebook being the most likely), one blog asked whether their choice of infrastructure was to blame:
1. Their bet on Microsoft technology doomed them for a variety of reasons.
2. Their bet on Los Angeles accentuated the problems with betting on Microsoft.
Let me explain.
The problem was, as Myspace started losing to Facebook, they knew they needed to make major changes. But they didn’t have the programming talent to really make huge changes and the infrastructure they bet on made it both tougher to change, because it isn’t set up to do the scale of 100 million users it needed to, and tougher to hire really great entrepreneurial programmers who could rebuild the site to do interesting stuff.
I won’t argue point 2, as the short time I spent in Los Angeles showed me that it wasn’t exactly the best place for acquiring technical talent (and although I haven’t been to San Francisco for a proper comparison, talking with friends who have seems to confirm this). However, betting on Microsoft technology is definitely not the reason MySpace started on its long downward spiral several years ago, as several commenters on that article point out. Indeed MySpace’s lack of innovation appears to stem from the fact that they outsourced much of their core development work to Telligent, a company that provides social network platforms. The issue with such an arrangement was that they were wholly dependent on Telligent for updates to the platform they were using, rather than owning it entirely in house. Indeed, as a few other commenters pointed out, the switch to the Microsoft stack actually allowed MySpace to scale much further with less infrastructure than they had previously. If there was a problem with scaling it definitely wasn’t coming from the Microsoft technology stack.
When I first started developing what became Lobaco, scalability was always nagging at the back of my head, taunting me that my choice of platform was doomed to failure. Indeed only a few start-ups have managed to make it big using the Microsoft technology stack, so going down this path would seem like a sure fire way to kill any good idea in its infancy. Still, I have a heavy investment in the Microsoft line of products, so I kept plugging away with it. Problems of scale appear to be unique to each technology stack, with all of them having their pros and cons. Realistically every company with large numbers of users has its own unique way of dealing with scale, and the technology used seems to be secondary to good architecture and planning.
Still, there’s a strong anti-Microsoft sentiment amongst those in Silicon Valley. Just for kicks I’ve been thumbing through the job listings of various start-ups in the area, toying with the idea of moving there to get some real world start-up experience. Almost none of them want to hear anything about a Microsoft based developer, instead preferring something like PHP/Rails/Node.js. Indeed some have gone so far as to say that .NET development is a black mark against you, serving only to limit your job prospects:
Programming with .NET is like cooking in a McDonalds kitchen. It is full of amazing tools that automate absolutely everything. Just press the right button and follow the beeping lights, and you can churn out flawless 1.6 oz burgers faster than anybody else on the planet.
However, if you need to make a 1.7 oz burger, you simply can’t. There’s no button for it. The patties are pre-formed in the wrong size. They start out frozen so they can’t be smushed up and reformed, and the thawing machine is so tightly integrated with the cooking machine that there’s no way to intercept it between the two. A McDonalds kitchen makes exactly what’s on the McDonalds menu — and does so in an absolutely foolproof fashion. But it can’t go off the menu, and any attempt to bend the machine to your will just breaks it such that it needs to be sent back to the factory for repairs.
I should probably point out that I don’t disagree with some of the points in his post, most notably how Microsoft makes everything quite easy for you if you’re following a particular pattern. The trouble comes when you try to work outside the box, and many programmers will simply not attempt anything that isn’t already solved by Microsoft. Heck, I encountered that very problem when I tried to wrangle their Domain Services API to send and receive JSON, a supported but wholly undocumented part of their API. I got it working in the end, but I could easily see many .NET developers simply saying it couldn’t be done, at least not in the way I was going about it.
Still, that doesn’t mean all .NET developers are simple button pushers, totally incapable of thinking outside the Microsoft box. Sure, there will be more of that type of programmer simply because .NET is used in so many places (just not Internet start-ups, by the looks of it), but to paint everyone who uses the technology with the same brush seems pretty far-fetched. Heck, if he was right there would’ve been no way for me to get my head around Objective-C, since it’s not supported by Visual Studio. Yet I managed to get competent in 2 weeks and can now hack my way around in Xcode just fine, despite my extensive .NET heritage.
It’s always the person or company, not the technology, that limits their potential. Sure, you may hit a wall with a particular language or infrastructure stack, but if your people are capable you’ll find a way around it. I might be in the minority when it comes to trying to start a company based around Microsoft technology, but the fact is that relearning another technology stack is a huge opportunity cost. If I do it right, however, the system should be flexible enough that I can replace parts of it with more appropriate technologies down the line if the need arises. People pointing the finger at Microsoft for all their woes are simply looking for a scapegoat so they don’t have to address larger systemic issues, or are just looking for some juicy blog fodder.
I guess they found the latter, since I certainly did.