When you read news about fusion it’s likely to be about a tokamak-type reactor. These large, doughnut-shaped devices have dominated fusion research for the past 3 decades, mostly because of their relative ease of construction when compared to other designs. That’s not to say they’re without their drawbacks, as the much-delayed ITER project can attest, but we owe much of the recent progress in this field to the tokamak design. There are other contenders, however, that could take over as the default design for future fusion reactors if they manage to perform at similar levels. One such design is called the stellarator, and its latest incarnation could be the first reactor to achieve the dream: steady state fusion.
Compared to a tokamak, which has a uniform shape, the stellarator’s containment vessel appears buckled and twisted. This comes down to the fundamental design difference between the two reactor types. In order to contain the hot plasma, which reaches temperatures of 100 million degrees Celsius, fusion reactors confine it with magnetic fields. Typically there are two: one that provides the pinch or compressing effect (the poloidal field) and another that keeps the plasma from wobbling about and hitting the containment vessel (the toroidal field). In a tokamak the poloidal field comes from within the plasma itself, by running a large current through it, while the toroidal field comes from the large magnets that run the length of the vessel. A stellarator, however, provides both the toroidal and poloidal fields externally, requiring no plasma current but necessitating a wild magnet and vessel design (pictured above). Those requirements are what hindered stellarator development for some time, but with the advent of computer-aided design and construction they’re starting to become feasible.
The Wendelstein 7-X, the successor to the 7-AS, is a stellarator that’s been a long time in the making, originally scheduled to be fully constructed by 2006. Due to the complexity and precision required of the stellarator design, which could only be finalised with the aid of supercomputer simulations, construction was only completed last year. The device itself is a marvel of modern engineering, with the vast majority of the construction being completed by robots, totalling some 1.1 million hours. The last year has seen it pass several critical validation tests, including containment vessel pressure tests and magnetic field verification. Where it really gets interesting though is where the future plans lead: to steady state power generation.
The initial experiments will focus on short-duration plasmas, with the current microwave generators able to produce 10 MW in 10-second bursts or 1 MW for 50 seconds. This is dubbed Operational Phase 1 and will serve to validate the stellarator’s design and operating parameters. Then, after the completion of additional construction work to include a water cooling transfer system, Operational Phase 2 will begin, allowing the microwave system to operate in a true steady state configuration for up to 30 minutes. Should Wendelstein 7-X accomplish this it will be a tremendous leap forward for fusion research and could very well pave the way for the first generation of commercial reactors based on this design.
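Those two heating modes trade peak power against pulse length; a quick bit of arithmetic (just E = P × t, using only the figures quoted above) shows the total energy delivered in each case:

```python
# Energy delivered by the microwave heating system in each
# Operational Phase 1 mode: E = P * t (MW * s = MJ).

def pulse_energy_mj(power_mw: float, seconds: float) -> float:
    """Total energy of a heating pulse, in megajoules."""
    return power_mw * seconds

high_power = pulse_energy_mj(10, 10)  # 10 MW for 10 seconds
long_pulse = pulse_energy_mj(1, 50)   # 1 MW for 50 seconds

print(f"10 MW x 10 s = {high_power:.0f} MJ")  # 100 MJ
print(f"1 MW x 50 s  = {long_pulse:.0f} MJ")  # 50 MJ
```

The short high-power burst actually dumps twice the energy of the long pulse, which is why Phase 2’s move to sustained heating needs that water cooling system before the vessel can soak up 30 minutes of it.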
Of course we’re still a long way from reaching that goal but this, coupled with the work being done at ITER, means we’re far closer than we’ve ever been to achieving the fusion dream. It might still be another 20 years away, as it always is, but never before have we had so many reactor designs in play at the scales we have today. We’ll soon have two (hopefully) validated designs, built at scale, that can achieve steady state plasma operations. Then it simply becomes a matter of economics and engineering, problems that are far easier to overcome. No matter how you look at it the clean, near limitless energy future we’ve long dreamed of is fast approaching, and that should give us all great hope.
The way we get most of the scientific data back from the rovers currently on Mars is indirect. There are four probes orbiting Mars at the moment (Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter and MAVEN), all of which carry communications relays able to receive data from the rovers and retransmit it back to Earth. This has significant advantages, chief among them that the orbiters have longer windows in which to communicate with Earth. Whilst all the rovers have their own direct connections back to Earth they’re quite limited, usually several orders of magnitude slower. The current rovers won’t have their communication links improved, but for future missions a better direct-to-Earth link could prove valuable, something which researchers at the University of California, Los Angeles (UCLA) have started to develop.
The design is an interesting one, essentially a flat panel of phased-array antenna elements using a novel construction. The reasoning behind it was that future Mars rover missions, specifically the Mars 2020 mission, would have constraints on how big an antenna they could carry. Taking this into account, along with the constraint that NASA typically uses X-band for deep space communications like this, the researchers came up with a design that maximises the antenna’s gain. The result is this flat, phased-array design which, when tested as a prototype 4 x 4 array, closely matched their simulated performance metrics.
With so many orbiters around Mars it might seem like a better direct-to-Earth link wouldn’t be useful, however there are no guarantees that those relays will always be available. Mission support for most of those orbiters is slated to end in the near future, with the longest-lived (MAVEN) due for decommissioning in 2024. Since a new rover is slated to land sometime in 2020, and since we know how long these things can last once they’ve landed, better on-board communications might become crucial to the ongoing success of the mission. Indeed, should any of the other rovers still be functioning at that time, the new rover may have to take on relay responsibilities, and that would demand a much better antenna design.
There’s still more research to be done with this particular prototype, namely scaling it up from its current 4 x 4 design to the ultimate 16 x 16 panel. Should the design scale as expected then there’s every chance you might see an antenna based on it flying in the near future. I’m definitely keen to see how this progresses as, whilst it currently has the singular goal of improving direct-to-Earth communications, the insights gleaned from this design could lead to better designs for all future deep space craft.
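As a rough illustration of why the scale-up matters: the gain of an idealised phased array grows with its element count as 10·log₁₀(N) over a single element. This little sketch ignores element gain, losses and mutual coupling (all assumptions on my part, not figures from the UCLA work), but it gives a feel for the jump from the 4 x 4 prototype to the 16 x 16 goal:

```python
import math

def array_gain_db(n_elements: int) -> float:
    """Ideal gain of a uniform phased array over one element,
    assuming lossless, perfectly coherent combining."""
    return 10 * math.log10(n_elements)

for side in (4, 16):
    n = side * side
    print(f"{side} x {side} ({n:3d} elements): +{array_gain_db(n):.1f} dB")
```

Going from 16 elements (+12 dB) to 256 elements (+24 dB) is another factor of 16 in effective radiated power toward Earth, which is why the full panel is the interesting milestone rather than the prototype.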
It’s sometimes hard to remember that smartphones are still a recent phenomenon, with the first devices to be categorised as such being less than a decade old. Sure there were phones before that which you could call smartphones, but back then they were more an amalgam of a PDA and a phone than a seamless blend of the two. Back then the landscape of handset providers was wildly different, dominated by a single player: Nokia. Their failure to capitalize on the smartphone revolution is a testament to incumbents failing to react to innovative upstarts, and the sale of their handset business to Microsoft an admission of that failure. You can then imagine my surprise that the now much smaller company is eyeing off a return to the smartphone market, as pretty much everyone would agree the horse has long since bolted for Nokia.
The strategy is apparently being born out of the Nokia Technologies arm, the smallest of the three branches that remained after the deal with Microsoft (the other two being its network devices and Here location divisions). This is the branch that holds Nokia’s 10,000 or so patents, so you’d think they’d simply be resting on their laurels and collecting licensing fees for time immemorial. However this section has been somewhat busy at work, having developed and licensed two products since the Microsoft deal. The first is Z Launcher, an Android launcher, and the second the N1, a tablet which they’ve licensed out to another manufacturer to whom they’ve also lent the Nokia brand name. The expectation is that future Nokia devices will likely follow the latter model, with Nokia doing most of the backend work but offloading the manufacturing and shipping to someone else.
There’s no doubt that Nokia had something of a cult following among Windows Phone users, as they provided some of the best handsets for that platform. Their other smartphones had no such following, as the pursuit of their own mobile ecosystem made them extremely unappealing to developers who were already split between two major platforms. Had Nokia retained control of the Lumia brand I could see them having an inbuilt user base for a future smartphone, especially if it came in an Android flavour, however that brand (and everything that backed it) went to Microsoft, and so did all the loyalty that went with it. Nokia is essentially starting from scratch here and, unfortunately, that doesn’t bode well for the once king of the phone industry.
Coming in at that level you’re essentially competing with every other similarly specced handset out there and, to be honest, it’s a market that eats up competitors like that without too much hassle. Outsourcing the actual manufacturing and distribution means they don’t shoulder a lot of the risk they used to with such designs, however it also means they have little control over the final product that reaches consumers. That being said, the N1 does look like a solid device, but that doesn’t necessarily mean future devices will share the same level of quality.
Nokia is going to have to do something to stand out from the pack and, frankly, without their brand loyalty behind them I’m struggling to see what they could do to claw back some of the market share they once had. There are innumerable companies now that have solid handset choices for nearly all sectors of the market and the Nokia brand name just doesn’t carry the weight it once did. If they’re seriously planning a return to the smartphone market they’re going to have to do much more than just make another handset, something which I’m not entirely sure the now slimmed down Nokia is capable of doing.
There’s no denying that the Space Shuttle was a unique design, being the only spacecraft capable of aerodynamic flight after reentry. That capability, initially born out of military requirements for single-orbit trips with significant downrange flight, came at a high cost in both financial and complexity terms, dashing any hopes it had of being the revolutionary gateway to space it was intended to be. A lot of the design and engineering was sound though, so it should come as little surprise to see elements of it popping up in other, more modern spacecraft designs. The most recent of those (to come to my attention, at least) is the European Space Agency’s Intermediate eXperimental Vehicle (IXV), a curious little craft that could be Europe’s ticket to delivering much more than dry cargo to space.
Whilst it’s not an almost-exact replica like the X-37B is, it’s hard to deny that the IXV bears a lot of the characteristics many of us associate with the Space Shuttle. The rounded nose, blackened bottom, white top and sleek profile are all very reminiscent of that iconic design, but that’s where the similarities end. The IXV is a tiny craft, weighing not much more than your typical car and lacking the giant wings that allowed the Shuttle to fly so far. This doesn’t mean it isn’t capable of flight however, as the entire craft is a lifting body, able to generate lift comparable to a winged aircraft. Steering is accomplished by two little paddles attached to the back, enabling the IXV to keep its thermal protection layer facing the right direction during reentry. For now the IXV is a completely robotic craft with little room to spare, save for a few on-board experiments.
Much like the X-37B, the IXV is being designed as a test bed for technologies the ESA wants to use in craft for future missions. Primarily this relates to its lifting body profile and the little flaps it uses for attitude control, things which have a very sound theoretical basis but haven’t seen many real world applications. If all goes according to plan the IXV will make its maiden flight in October this year, rocketing up to the same altitude as the International Space Station, nearly completing an orbit and then descending back down to Earth. Whilst its design would make you think it’d land at an air strip, this model will actually end up in the Pacific Ocean, using its aerodynamic capabilities to guide itself to a smaller splashdown region than could typically be achieved otherwise. It also lacks any landing gear to speak of, relying instead on parachutes to cushion the final stages of its descent.
Future craft based on the IXV platform won’t be your typical cargo-carrying ISS ferries, however, as the ESA is looking to adapt it into an orbital platform, much like the Shuttle was early in its life. The ability to return cargo from orbit is something a lot of spacefaring nations currently lack, with most relying on Russian craft or pinning their hopes on the capabilities of the up-and-coming private space industry. This opens up a lot of opportunities for scientists to conduct experiments that might be cost prohibitive on the ISS, or even ones that might be considered too dangerous. There doesn’t appear to be any intention to make an IXV variant that will carry humans into space, although there are already numerous lifting body craft in various stages of production aiming for that capability.
It’s going to be interesting to see where the ESA takes the IXV platform, as it fills a niche that’s currently not serviced particularly well. Should they be able to transform the IXV from a prototype into a full production vehicle within 3 years that would be mightily impressive, but I have the feeling that’s a best case scenario, something which is rare when designing new craft. Still, it’s an interesting craft and I’m very excited to see what missions it ends up flying.
On paper the Space Shuttle was the herald of a new space age, one where access to the final frontier would be cheap and reliable, ushering in the next wave of human prosperity. It would do this through two innovative (at the time) ideas: make the craft reusable and reduce the turnaround time between launches to a mere 2 weeks, enabling 26 flights per year at a drastically lower cost than any other launch system. Unfortunately, due to the requirements placed on it by the numerous agencies that had a hand in designing it, the final incarnation could not meet the latter goal and thus failed to provide the cheap access to space it dreamed of. It also taught us a lot about spacecraft design, most notably that giant space planes aren’t a particularly efficient way of getting payloads into orbit.
That doesn’t seem to stop people from designing more of them, however.
DARPA recently announced that it was seeking designs for a revolutionary space vehicle, dubbed the XS-1, with the intention of drastically lowering the cost per kilogram to orbit for small payloads (up to about 2,000 kg). The design requirements are fairly open, with the only stipulations being that the main craft is a reusable, hypersonic vehicle and that the payload achieves its desired orbit using a traditional rocket. This means that whilst the potential craft detailed in the artist’s impression above is a good indicator of what the XS-1 hopes to achieve, the actual craft could end up being radically different, especially if any of the other companies currently playing in this field have anything to do with it.
The main goal of the program is to drastically reduce the cost to orbit for smaller payloads, by almost an order of magnitude compared to traditional launch systems. This, in turn, would allow a lot of missions that were otherwise infeasible to become a reality, and whilst the initial applications are more than likely to be military in nature I’m sure any private contractor would ensure a dual use agreement for the bulk of the technology. The crux of the XS-1, at least in my opinion, is whether or not this is achievable in the time frames that have been set out for the project, considering that the first launch is scheduled for 2017.
Taking the rule of 6 into account (Mach 6 at 60,000 feet represents about 6% of the energy required for orbital velocity), a craft with such a flight profile would need several strong technological advances in order to fly. The only engines capable of achieving speeds above that (at the required price) are scramjets, and the fastest we’ve ever managed to fly one was Mach 5.1 last year. That means there’s still a long way to go to get sustained flight out of a hypersonic, air-breathing engine, and it’s questionable whether anyone will achieve it in that time frame. Indeed, even Lockheed Martin, who recently announced the hypersonic SR-72, doesn’t believe they’ll have a prototype flying before 2023.
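The rule of 6 holds up to a back-of-envelope check. Using a speed of sound of roughly 295 m/s at 60,000 feet and a typical low-orbit velocity of about 7,800 m/s (both my assumed figures, not from the rule itself), the specific energy of the craft comes out near the quoted 6%:

```python
# Sanity-check the "rule of 6": specific energy (kinetic + potential)
# at Mach 6 / 60,000 ft versus the kinetic energy of low Earth orbit.
a_sound = 295.0          # m/s, approx. speed of sound at 60,000 ft
v_craft = 6 * a_sound    # Mach 6
v_orbit = 7800.0         # m/s, typical LEO velocity
h = 60_000 * 0.3048      # altitude in metres
g = 9.81                 # m/s^2

e_craft = 0.5 * v_craft**2 + g * h  # J/kg for the hypersonic craft
e_orbit = 0.5 * v_orbit**2          # J/kg for orbit

print(f"Fraction of orbital energy: {e_craft / e_orbit:.1%}")
```

That lands within spitting distance of 6%, which also makes plain how brutal the remaining 94% is: the rocket upper stage still has to supply the overwhelming majority of the energy.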
I’m a fan of the idea, and indeed if anyone can pull it off I’ll be wildly impressed, however the technology to support it is still in its infancy, with the cutting edge far from viability. There are other ways of tackling it, of course, but I can’t really see any of them being done for the price DARPA is asking. Indeed the cheapest all-rocket solution comes from SpaceX, but it’s still double the asking price for less payload than DARPA requires. In any case the designs will hopefully show some ingenuity and, if we’re lucky, 2017 will bring us another baby brother to the retired Space Shuttle.
One of the first ideas an engineer in training is introduced to is modularity. This is the concept that every problem, no matter how big, can be broken down into a set of smaller, interlinked problems. The idea is that you can design solutions specific to each problem space rather than trying to solve everything in one fell swoop, an approach that is guaranteed to be error-prone and likely never to achieve its goals. Right after you’re introduced to that idea you’re also told that modularity done for its own sake can lead to the exact same problems, so its use must be tempered with moderation. It’s this latter point that I think the designers of Phonebloks might be missing, even though as a concept I really like the idea.
For the uninitiated the idea is relatively simple: you buy what equates to a motherboard into which you can plug various bits and pieces, with one side dedicated to a screen and the other to all the components you’ve come to expect from a traditional smartphone. Essentially it takes the idea of building your own PC and applies it to the smartphone market, done in the hope of reducing electronic waste since you’ll only be upgrading parts of the phone rather than the whole device at a time. The lofty idea is that this will eventually become the platform for everyone, and smartphone component makers will line up to build additional blocks for it.
As someone who’s been building his own PCs for the better part of 3 decades now, I think the assumption that the base board, and by extension the interconnects on it, will never change is probably the largest fundamental flaw in Phonebloks. I’ve built many PCs with the latest CPU socket in the hope that I could upgrade on the cheap at a later date, only to find that, when the time came, a newer and far superior socket was available. Whilst the Phonebloks board can likely be made to accommodate current requirements, it’s inevitable that further down the track some component will require more connections or a higher bandwidth interface, necessitating its replacement. Then, just as with all those PCs I bought, this will also necessitate re-buying all the additional components, putting us right back in the position we’re in currently.
That’s not to mention that hoping other manufacturers, ones that already have a strong presence in the smartphone industry, will build components for it is an endeavour likely to be met with heavy resistance, if it’s not outright ignored. Whilst there are a couple of companies that would be willing to sell various components (Sony with their EXMOR R sensor, ARM with their processors, etc.) they’re certainly not going to bother with the integration, something that would likely cost them far more than any profit they’d see from being on the platform.
Indeed I think that’s the biggest issue this platform faces. Whilst it’s admirable that they’re seeking to be the standard modular platform for smartphones, the standardization in the PC industry did not come about overnight and took the collaboration of multiple large corporations to achieve. Without that kind of support I’m struggling to see how this platform can get the diversity it needs to become viable, and as far as I can tell the only backing they’ve got is from a bunch of people willing to tweet on their behalf.
Fundamentally I like the idea because, whilst I’m able to find a smartphone that suits the majority of my wants pretty easily, there are always things I’d like to trade for others. My current Xperia Z would be a lot better if the speakerphone wasn’t rubbish and the battery could charge wirelessly, and I’d happily shuffle around some of the other components to get my device just right. However I’m also aware of the giant integration challenge such a modular platform presents, and whilst they might be able to get a massive burst of publicity I’m skeptical that it will turn into a viable product platform. I’d love to be wrong on this but, as someone who’s seen many decades of modular platform development and the tribulations it entails, I can’t say I’m banking money for my first Phonebloks device.
Ever since Elon Musk uttered the word Hyperloop in the middle of last year the tech world has been abuzz with speculation as to what it might actually be. Whilst it was known to be some kind of tube-based transportation system, the specifics given out were incredibly slim which, of course, led to an incredible amount of hype. If anyone else had said something like this it would be easy to dismiss them, but Musk, founder of SpaceX and Tesla, seems to have a knack for bringing seemingly crazy ideas to life. After a year of anticipation, teasing and rampant speculation, Musk has finally released the first iteration of his Hyperloop design, and it’s quite impressive.
So it seems the best speculators out there got the design mostly right: it’s a low pressure tube system that could conceivably work both above and below ground and uses linear accelerators (i.e. railguns) to get the pods up to the required speed. The really interesting part, however, is the pod design, as the pods are what make the whole system viable. In a column of air like that contained within the Hyperloop tube you’d eventually end up pushing the entire column of air in front of you, not so great if you want to achieve high speeds. Hyperloop overcomes this by mounting an intake at the front of each pod that drives a compressor, effectively shunting all that air out of the way. At the same time the air being taken in is used to power the air bearings at the bottom of the craft. Additionally, the pods get reboosted by linear accelerators every 70 miles or so, reducing the power capacity required to run the compressors during travel.
What’s quite impressive are the rather low power requirements, with the passenger-only version of the capsule needing only 100 kW to keep it trucking along. That’s comparable to the engine of a typical 2-door hatchback which, as anyone who’s driven one can attest, struggles under the weight of 4+ passengers and cargo. The combination of a low pressure environment, leading-face intake and air bearings seems to be enough to dramatically reduce the total power required to stay at high subsonic speeds. The variant with a vehicle compartment ups the power requirements considerably however, requiring some 285 kW to accomplish the same task.
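To put that 100 kW figure in perspective, here’s a rough per-passenger energy estimate. The trip time of about 35 minutes and the 28-passenger capacity are illustrative assumptions on my part (figures of this order appear in the alpha proposal, but treat them as such):

```python
# Rough per-passenger energy for the passenger-only Hyperloop capsule.
# 100 kW is the quoted cruise power; trip time and capacity are
# assumed illustrative values, not confirmed specifications.
power_kw = 100          # cruise power draw
trip_hours = 35 / 60    # ~35 minute trip (assumed)
passengers = 28         # capsule capacity (assumed)

energy_kwh = power_kw * trip_hours
per_passenger = energy_kwh / passengers
print(f"Trip energy: {energy_kwh:.1f} kWh, "
      f"{per_passenger:.2f} kWh per passenger")
```

Roughly 2 kWh per passenger for a city-to-city hop is a tiny fraction of what a car or plane burns over the same distance, which is what makes the solar-powered claim later in the proposal plausible at all.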
The Hyperloop design also includes a whole bunch of other little innovations that make it quite appealing. Whilst it could be built underground I imagine doing so would be rather costly, as digging tunnels is never cheap. The above ground design, however, looks like it could accomplish the same goal without requiring massive amounts of construction, even less than that of your typical highway. This is due to its monorail-like construction, utilizing pillars to elevate it above the ground. Such a system could then run alongside established highways, with any detours easily accommodated. The top surface could then have solar panels mounted on it, providing the majority of the energy required to power the system and making Hyperloop a very environmentally friendly transportation option.
Of course it’s still very much a theoretical system, albeit a thoroughly thought-out one. Whilst I doubt it’ll end up replacing the high speed train link that Musk wants it to (even though he claims it would be cheaper and faster), once there’s a demonstration link up and running I can see people taking it very seriously. Heck, we’ve been talking about high speed rail in Australia for decades and it’s always been killed because of the cost. Hyperloop could be the solution to that, and we could finally get that MEL-CBR-SYD-BNE link everyone’s been wanting without the project going down in flames long before ground gets broken.
And yes I want that for almost entirely selfish reasons, flying to Sydney is almost not worth the effort 😉
It’s no secret that I’m a big fan of my Samsung Galaxy S2, mostly because the specifications are enough to make any geek weak at the knees. It’s not just geeks that are obsessed with the phone either, as Samsung has moved an impressive 10 million of them in the 5 months it’s been available. Samsung has made something of a name for itself as the phone manufacturer to have if you’re looking for an Android handset, especially when you consider Google used their original Galaxy S as the basis for its flagship phone, the Nexus S. Rumours have been circulating for a while that Samsung would once again be the manufacturer of choice, a surprising rumour considering Google had just sunk a few billion into acquiring Motorola.
Yesterday however saw the announcement of Google’s new flagship phone the Galaxy Nexus and sure enough it’s Samsung hardware that’s under the hood.
The stand-out feature of the Galaxy Nexus is its gigantic screen, coming in at an incredible 4.65 inches with a resolution of 1280 x 720 (the industry standard for 720p). That gives it a pixel density of around 316 PPI, slightly below the iPhone 4/4S’ Retina screen at 326 PPI, which is amazing when you consider the Nexus’ screen is well over an inch bigger. As far as I can tell it’s the highest resolution on any smartphone currently on the market, and only a handful of handsets boast a similar sized screen. Whether this monster of a screen will be a drawcard though is up for debate, as not all of us are blessed with giant hands to take full advantage of it.
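The PPI figure falls straight out of the resolution and diagonal size, if you want to check it yourself:

```python
import math

# Pixel density from resolution and diagonal: PPI = sqrt(w^2 + h^2) / d
def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count over diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

nexus = ppi(1280, 720, 4.65)
print(f"Galaxy Nexus: {nexus:.0f} PPI (vs the iPhone 4/4S' quoted 326 PPI)")
```

Same formula works for any screen, so it’s an easy way to cut through marketing claims about "retina" displays.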
Under the hood it’s a bit of a strange beast, especially compared to its predecessors. It uses a Texas Instruments OMAP 4460 processor (dual-core 1.2GHz Cortex-A9) instead of Samsung’s own Exynos SoC, coupled with a whopping 1GB of RAM. The accompanying hardware includes a 5MP camera capable of 1080p video, all the usual connectivity options with the addition of NFC and wireless N and, strangely enough, a barometer. The Galaxy Nexus does not feature expandable storage like most of its predecessors did, instead coming in 16GB and 32GB variants. All up it makes for a phone that’s definitely a step up from the Galaxy S2, though not in every regard, with some features on par with or below those of the S2.
Looking at the design of the Galaxy Nexus I couldn’t help but notice that it had sort of regressed back to the previous design style, being more like the Galaxy S rather than the S2. As it turns out this is quite deliberate as Samsung designed the Galaxy Nexus in such a way as to avoid more lawsuits from Apple. It’s rather unfortunate as the design of the Galaxy S2 is really quite nice and I’m not particularly partial to the rounded look at all. Still I can understand why they want to avoid more problems with Apple, it’s a costly exercise and neither of them are going to come out the other side smelling of roses.
Hand in hand with the Galaxy Nexus announcement, Google has also debuted Ice Cream Sandwich, the latest version of the Android OS. There’s a myriad of improvements I won’t go through here (follow the link for a full run down) but notable features include the ability to unlock your phone by having it recognize your face, integrated screen capture (yes, that hasn’t been a default feature until now), an NFC sharing app called Android Beam, and a better interface for seeing how much data you’re using that includes the ability to kill data-hogging apps. Like the Galaxy Nexus itself, Ice Cream Sandwich is more of an evolutionary step than a revolutionary one, but it looks like a worthy complement to Google’s new flagship phone.
The Galaxy Nexus shows that Samsung is very capable of delivering impressive smartphones over and over again. The hardware, for the most part, is quite incredible, bringing features to the table that haven’t been seen before. Ice Cream Sandwich looks to be a good upgrade to the Android operating system and, coupled with the Galaxy Nexus, the pair make for one very desirable smartphone. Will I be getting one? Probably not, as my S2 is more than enough to last me until next year when I’ll be looking to upgrade again, but I can’t say I’m not tempted 😉
Whilst the Space Shuttle will always be one of the most iconic spacecraft humanity has created, its design was one of compromises and competing objectives. One design feature, which influenced nearly every characteristic of the Shuttle, was the requirement from the Department of Defense that the Shuttle be able to launch into a polar orbit and return after a single trip around the Earth. This is the primary reason the Shuttle is so aeroplane-like in its design: it needed those large wings for the long downrange capability required to return to its launch site after a single orbit. The Shuttle never flew such a mission, but now I know why the DoD required the capability.
It was speculated that that particular requirement was spawned out of a need to capture spy satellites, both their own and possibly enemy reconnaissance craft. At the time digital photography was still very much in its infancy and high resolution imagery was still film based, so any spy satellite would be carrying film on board. The Shuttle could then serve as the retrieval vehicle for the spy craft as well as functioning as a counter-intelligence device. It never flew a mission like this for a couple of reasons, the main one being that a Shuttle launch was far more expensive than simply deorbiting a satellite and sending another one up. There was also the rumour that Russia had started arming its spacecraft, making sending humans up to retrieve them an unnecessary risk.
The Shuttle’s payload bay was also massive in comparison to the spy satellites of the time, which called the DoD’s requirement further into question. It seems, however, that a recently declassified spy satellite called HEXAGON was actually a perfect fit and could have influenced the Shuttle’s design:
CHANTILLY, Va. – Twenty-five years after their top-secret, Cold War-era missions ended, two clandestine American satellite programs were declassified Saturday (Sept. 17) with the unveiling of three of the United States’ most closely guarded assets: the KH-7 GAMBIT, the KH-8 GAMBIT 3 and the KH-9 HEXAGON spy satellites.
“I see a lot of Hubble heritage in this spacecraft, most notably in terms of spacecraft size,” Landis said. “Once the space shuttle design was settled upon, the design of Hubble — at the time it was called the Large Space Telescope — was set upon. I can imagine that there may have been a convergence or confluence of the designs. The Hubble’s primary mirror is 2.4 meters [7.9 feet] in diameter and the spacecraft is 14 feet in diameter. Both vehicles (KH-9 and Hubble) would fit into the shuttle’s cargo bay lengthwise, the KH-9 being longer than Hubble [60 feet]; both would also fit on a Titan-class launch vehicle.”
HEXAGON is an amazing piece of Cold War-era technology. It was equipped with two medium format cameras that would sweep back and forth, capturing a swath 370 nautical miles wide. Each HEXAGON satellite carried some 60 miles worth of film in 4 separate film buckets, which would detach from the craft when used and return to Earth, where they would be snagged by a capture craft. They were hardy little canisters too: one ended up at the bottom of the ocean but was still retrieved by one of the Navy’s Deep Submergence Vehicles. There were around 20 launches of the HEXAGON series of craft, with only a single failure towards the end of the program.
What really surprised me about HEXAGON, though, was the resolution it was able to achieve some 30+ years ago. HEXAGON’s resolution was improved throughout its lifetime, with later missions achieving a resolution of some 60cm, more than enough to make out individual people and capture very detailed images of, say, cars and other craft. For comparison GeoEye-1, which had the highest resolution camera of any earth imaging craft at the time of its launch, is only just capable of 40cm per pixel (and that imagery is the property of the US government). Taking that into consideration I wonder what kind of imaging satellites the USA is using now, considering that the DoD appears to be a couple of decades ahead of the commercial curve.
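As a rough sanity check on that 60cm figure, the Rayleigh criterion gives the best-case ground resolution an optical system can achieve from orbit. The aperture and altitude below are my own ballpark assumptions for illustration, not published HEXAGON specs:

```python
def ground_resolution(wavelength_m, altitude_m, aperture_m):
    """Best-case ground resolution (metres) from the Rayleigh diffraction limit."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Assumed ballpark figures: visible light (~550 nm), a low perigee of
# ~160 km and a ~0.5 m aperture. None of these are confirmed specs.
res = ground_resolution(550e-9, 160e3, 0.5)
print(f"Theoretical limit: {res:.2f} m")  # roughly 0.21 m
```

Under those assumptions the theoretical limit sits around 20cm, so a real-world 60cm result (after film grain, motion blur and atmospheric effects eat into the ideal figure) is entirely plausible for optics of that class.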
It’s always interesting when pieces of a larger puzzle like the Shuttle’s design start falling into place. Whilst it’s debatable whether or not HEXAGON (and its sister craft) were a direct influence on the Shuttle, there are enough coincidences to give the theory a bit of credence. I can see why the USA kept HEXAGON a secret for so long; that kind of capability would’ve been downright scary back in the ’80s, and its reveal makes you wonder what they’re flying now. It’s stuff like this that keeps me obsessed with space and what we, as a species, are capable of.
No beating around the bush on this one, Steve Jobs has resigned:
To the Apple Board of Directors and the Apple Community:
I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.
I hereby resign as CEO of Apple. I would like to serve, if the Board sees fit, as Chairman of the Board, director and Apple employee.
As far as my successor goes, I strongly recommend that we execute our succession plan and name Tim Cook as CEO of Apple.
I believe Apple’s brightest and most innovative days are ahead of it. And I look forward to watching and contributing to its success in a new role.
I have made some of the best friends of my life at Apple, and I thank you all for the many years of being able to work alongside you.
The news shouldn’t come as a shock to anyone. Jobs has been dealing with health problems for many years now and he’s had to scale back his involvement with the company as a result. The appointment of Tim Cook as the new CEO shouldn’t come as a surprise either, as Cook has been acting as interim CEO during Jobs’ absences over the past few years. Jobs’ involvement in Apple won’t completely cease either if the board approves his appointment as Chairman, which I doubt they’ll think twice about doing. The question on everyone’s lips is, of course, where Apple will go from here.
The stock market understandably reacted quite negatively, with Apple shares down a whopping 5.23% at the time of writing. The reasons behind this are many, but primarily it comes down to the fact that Apple, for better or for worse, has built much of its image around its iconic CEO. Jobs has also had a strong influence over the design of new products, whereas Cook, whilst more than capable of stepping up, has no such design pedigree, being more of a traditional operations guy. Of course no idea exists in a vacuum and I’m sure the talented people at Apple will be more than capable of continuing to deliver winning products, just as they did with Jobs at the helm.
But will that be enough?
For the most part I’d say yes. Whilst the Jobs fan club might be one of the loudest and proudest out there, the vast majority of Apple users are just interested in the end product. Whilst they might lose Jobs’ vision for product design (although even that’s debatable, since he’s still on the board) Apple has enough momentum with its current line of products to carry it over any rough patches whilst it finds its feet in a post-Jobs world. The stock market’s reaction is no indicator of consumer confidence in Apple, and I’m sure only a minority of people have decided to stop buying Apple products now that Jobs isn’t at the helm.
Apple’s current success is undeniably down to Jobs’ influence, and his absence will prove a challenge for Apple to overcome. I highly doubt that Apple will suffer much because of it (the share price really only affects traders and speculators), with a year or two of products in the pipeline that Jobs would have presided over. The question is whether the new CEO, or any public face of Apple, will be able to cultivate an image on the same level as Jobs did.