One of the first ideas an engineer in training is introduced to is modularity: the concept that every problem, no matter how big, can be broken down into a set of smaller, interlinked problems. The idea is that you can design solutions specific to each part of the problem space rather than trying to solve everything in one fell swoop, an approach that is guaranteed to be error-prone and likely never to achieve its goals. Right after you’re introduced to that idea you’re also told that modularity done for its own sake can lead to the exact same problems, so its use must be tempered with moderation. It’s this latter point that I think the designers of Phonebloks might be missing, even though as a concept I really like the idea.
For the uninitiated the idea is relatively simple: you buy what equates to a motherboard into which you can plug various bits and pieces, with one side dedicated to a screen and the other to all the components you’ve come to expect from a traditional smartphone. Essentially it’s taking the idea of building your own PC and applying it to the smartphone market, in the hope of reducing electronic waste since you’ll only be upgrading parts of the phone rather than the whole device at a time. The lofty goal is that this will eventually become the platform for everyone, with smartphone component makers lining up to build additional blocks for it.
As someone who’s been building his own PCs for the better part of three decades now, I think the assumption that the base board, and by extension the interconnects on it, will never change is the largest fundamental flaw in Phonebloks. I’ve built many PCs with the latest CPU socket on them in the hope that I could upgrade on the cheap at a later date, only to find that, when it came time to upgrade, a newer and far superior socket was available. Whilst the Phonebloks board can likely be made to accommodate current requirements, it’s inevitable that further down the track some component will require more connections or a higher-bandwidth interface, necessitating its replacement. Then, just as with all those PCs I bought, this will also mean re-buying all the additional components, putting us right back in the position we’re in currently.
This is not to mention that hoping other manufacturers, ones that already have a strong presence in the smartphone industry, will build components for it is an endeavour likely to be met with heavy resistance, if it’s not outright ignored. Whilst there are a couple of companies that would be willing to sell various components (Sony with their EXMOR R sensor, ARM with their processors, etc.) they’re certainly not going to bother with the integration work, something that would likely cost them much more than any profit they’d see from being on the platform.
Indeed I think that’s the biggest issue this platform faces. Whilst it’s admirable that they’re seeking to be the standard modular platform for smartphones, standardization in the PC industry did not come about overnight and took the collaboration of multiple large corporations to achieve. Without that kind of support I’m struggling to see how this platform can get the diversity it needs to become viable, and as far as I can tell the only backing they’ve got is from a bunch of people willing to tweet on their behalf.
Fundamentally I like the idea: whilst I’m able to find a smartphone that suits the majority of my wants pretty easily, there are always things I’d like to trade in for others. My current Xperia Z would be a lot better if the speakerphone wasn’t rubbish and the battery was capable of charging wirelessly, and I’d happily shuffle around some of the other components in order to get my device just right. However I’m also aware of the giant integration challenge that such a modular platform presents, and whilst they might be able to get a massive burst of publicity I’m skeptical that it will turn into a viable product platform. I’d love to be wrong on this, but as someone who’s seen many decades of modular platform development and the tribulations it entails I can’t say I’m setting money aside for my first Phonebloks device.
Ever since Elon Musk uttered the word Hyperloop in the middle of last year the tech world has been abuzz with speculation as to what it might actually be. Whilst it was known to be some kind of tube-based transportation system the specifics given out were incredibly slim, which, of course, led to an incredible amount of hype. If anyone else had said something like this it would be easy to dismiss them, but Musk, founder of SpaceX and Tesla, seems to have a knack for bringing seemingly crazy ideas to life. After a year of anticipation, teasing and rampant speculation Musk has finally released the first iteration of his Hyperloop design, and it’s quite impressive.
So it seems the best speculators out there got the design mostly right: it’s a low-pressure tube system that could conceivably work both above and below ground and uses linear accelerators (i.e. railguns) to get the pods up to the required speed. The really interesting part, however, is the pod design, as that’s what makes the whole system viable. You see, in a column of air like that contained within the Hyperloop tube you’d eventually end up pushing the entire column of air in front of you, not so great if you want to achieve high speeds. Hyperloop overcomes this by mounting an intake at the front that drives a compressor, effectively shunting all that air out of the way. At the same time some of the air taken in is used to feed the air bearings at the bottom of the craft. Additionally the pods get reboosted every 70 miles by linear accelerators, reducing the power capacity required to drive the compressors during travel.
What is quite impressive is the rather low power requirement for the passenger-only version of the capsule, needing only 100 kW to keep it trucking along. That’s comparable to a typical 2-door hatchback engine which, as anyone who’s driven one can attest, struggles under the weight of 4+ passengers and cargo. However the combination of a low-pressure environment, leading-face intake and air bearings seems to be enough to dramatically reduce the total power required to stay at high subsonic speeds. The variant with a vehicle compartment ups the power requirements considerably, however, requiring some 285 kW to accomplish the same task.
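To put that 100 kW figure in perspective, here’s a back-of-the-envelope comparison in Python. The ~1,100 km/h cruise speed and 28-passenger capacity are figures from the alpha proposal, and the 15 kW hatchback cruise figure is my own rough assumption, so treat this as a sketch rather than gospel:

```python
# Back-of-the-envelope energy comparison for the passenger capsule.
# Assumed figures: 100 kW cruise power (from above), ~1,100 km/h cruise
# speed and 28 passengers per capsule (both from the alpha proposal).
cruise_power_kw = 100
cruise_speed_kmh = 1100
passengers = 28

capsule_kwh_per_km = cruise_power_kw / cruise_speed_kmh
per_passenger_km = capsule_kwh_per_km / passengers

# Rough hatchback comparison: ~15 kW to hold 100 km/h, 4 people aboard.
car_per_passenger_km = 15 / 100 / 4

print(f"Hyperloop: {per_passenger_km * 1000:.1f} Wh per passenger-km")
print(f"Hatchback: {car_per_passenger_km * 1000:.1f} Wh per passenger-km")
```

That works out to an order of magnitude better than the car per passenger, which is about what you’d hope for from something travelling in a near-vacuum on air bearings.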
The Hyperloop design also includes a whole bunch of other little innovations that make it quite appealing. Whilst it could be built underground I imagine doing so would be rather costly, as digging tunnels is never cheap. The above-ground design, however, looks like it could accomplish the same goal without massive amounts of construction, even less than that of your typical highway. This is due to its monorail-like construction, using pillars to elevate the tube above the ground. Such a system could then run alongside established highways, with any detours easily accommodated. The top surface of the tube could have solar panels mounted on it, providing the majority of the energy required to power the system and making Hyperloop a very environmentally friendly transportation system.
Of course it’s still very much a theoretical system, albeit a thoroughly thought-out one. Whilst I doubt it’ll end up replacing the high-speed train link that Musk wants it to (even though he claims it would be cheaper and faster), once there’s a demonstration link up I can see people taking it very seriously. Heck, we’ve been talking about high-speed rail in Australia for decades and it’s always been killed because of the cost. Hyperloop could be the solution to that, and we could finally get that MEL-CBR-SYD-BNE link everyone’s been wanting without the project going down in flames long before ground gets broken.
And yes, I want that for almost entirely selfish reasons; flying to Sydney is almost not worth the effort.
It’s no secret that I’m a big fan of my Samsung Galaxy S2, mostly because the specifications are enough to make any geek weak at the knees. It’s not just geeks that are obsessed with the phone either, as Samsung has moved an impressive 10 million of them in the 5 months that it’s been available. Samsung has made something of a name for itself as the phone manufacturer to have if you’re looking for an Android handset, especially when you consider Google used their original Galaxy S as the basis for their flagship phone, the Nexus S. Rumours have been circulating for a while that Samsung would once again be the manufacturer of choice, a surprising rumour considering Google had just sunk a few billion into acquiring Motorola.
Yesterday however saw the announcement of Google’s new flagship phone the Galaxy Nexus and sure enough it’s Samsung hardware that’s under the hood.
The stand-out feature of the Galaxy Nexus is the gigantic screen, coming in at an incredible 4.65 inches with a resolution of 1280 x 720 (the industry standard for 720p). That gives you a pixel density of around 316 PPI, only slightly below the iPhone 4/4S’s Retina display at 326 PPI, which is amazing when you consider the screen is well over an inch bigger. As far as I can tell it’s the highest resolution on a smartphone currently on the market, and there’s only a handful of handsets that boast a similar-sized screen. Whether this monster of a screen will be a drawcard though is up for debate, as not all of us are blessed with giant hands to take full advantage of it.
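For the curious, PPI is just the diagonal pixel count divided by the diagonal size in inches, which you can sanity-check in a few lines of Python. The iPhone figures below are the commonly quoted 960 x 640 at 3.5 inches:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal pixel count over diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(f"Galaxy Nexus: {ppi(1280, 720, 4.65):.0f} PPI")  # ~316
print(f"iPhone 4/4S:  {ppi(960, 640, 3.5):.0f} PPI")    # ~330 (quoted as 326)
```

The small discrepancy on the iPhone figure suggests Apple’s quoted 326 uses a slightly different effective diagonal, but the numbers are close enough for comparison purposes.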
Under the hood it’s a bit of a strange beast, especially when compared to its predecessors. It uses a Texas Instruments OMAP 4460 processor (a dual-core 1.2GHz ARM Cortex-A9 part) instead of Samsung’s own Exynos SoC, coupled with a whopping 1GB of RAM. The accompanying hardware includes a 5MP camera capable of 1080p video, all the usual connectivity options with the addition of NFC and wireless N and, strangely enough, a barometer. The Galaxy Nexus does not feature expandable storage like most of its predecessors did, instead coming in 16GB and 32GB variants. All up it makes for a phone that’s definitely a step up from the Galaxy S2, though not in every regard, with some features on par with or below those of the S2.
Looking at the design of the Galaxy Nexus I couldn’t help but notice that it has sort of regressed to the previous design style, looking more like the Galaxy S than the S2. As it turns out this is quite deliberate, as Samsung designed the Galaxy Nexus in such a way as to avoid more lawsuits from Apple. It’s rather unfortunate, as the design of the Galaxy S2 is really quite nice and I’m not particularly partial to the rounded look at all. Still, I can understand why they want to avoid more problems with Apple; it’s a costly exercise and neither of them is going to come out the other side smelling of roses.
Hand in hand with the Galaxy Nexus announcement Google has also debuted Ice Cream Sandwich, the latest version of the Android OS. There’s a myriad of improvements that I won’t go through here (follow the link for a full run down) but notable features are the ability to unlock your phone by having it recognize your face, integrated screen capture (yes, it’s taken this long for that to become a default feature), an NFC sharing app called Android Beam and a better interface for seeing how much data you’re using, including the ability to kill data-hogging apps. Like the Galaxy Nexus itself Ice Cream Sandwich is more of an evolutionary step than a revolutionary one, but it looks like a worthy complement to Google’s new flagship phone.
The Galaxy Nexus shows that Samsung is very capable of delivering impressive smartphones over and over again. The hardware, for the most part, is quite incredible, bringing features to the table that haven’t been seen before. Ice Cream Sandwich looks to be a good upgrade to the Android operating system, and coupled with the Galaxy Nexus the pair make for one very desirable smartphone. Will I be getting one? Probably not, as my S2 is more than enough to last me until next year when I’ll be looking to upgrade again, but I can’t say I’m not tempted.
Whilst the Space Shuttle will always be one of the most iconic spacecraft humanity has created, its design was one of compromises and competing objectives. One design feature, which influenced nearly every characteristic of the Shuttle, was the requirement from the Department of Defense that the Shuttle be able to launch into a polar orbit and return after a single trip around the Earth. This is the primary reason the Shuttle is so aeroplane-like in its design: it needed those large wings to give it the long downrange capability required to return to its launch site after that single orbit. The Shuttle never flew such a mission, but now I know why the DoD required this capability.
It was speculated that that particular requirement was spawned out of a need to capture spy satellites, both their own and possibly enemy reconnaissance craft. At the time digital photography was still very much in its infancy and high-resolution imagery was still film-based, so any spy satellite would be carrying film on board. The Shuttle could then serve as the retrieval vehicle for the spy craft as well as functioning as a counter-intelligence device. It never flew a mission like this for a couple of reasons, chief among them that a Shuttle launch was far more expensive than simply deorbiting a satellite and sending another one up. There was also the rumour that Russia had started arming its spacecraft, making sending humans up to retrieve them an unnecessary risk.
The Shuttle’s payload bay was also quite massive in comparison to the spy satellites of the time which put further into question the DoD’s requirements. It seems however that a recently declassified spy satellite, called HEXAGON, was actually the perfect fit and could have influenced the Shuttle’s design:
CHANTILLY, Va. – Twenty-five years after their top-secret, Cold War-era missions ended, two clandestine American satellite programs were declassified Saturday (Sept. 17) with the unveiling of three of the United States’ most closely guarded assets: the KH-7 GAMBIT, the KH-8 GAMBIT 3 and the KH-9 HEXAGON spy satellites.
“I see a lot of Hubble heritage in this spacecraft, most notably in terms of spacecraft size,” Landis said. “Once the space shuttle design was settled upon, the design of Hubble — at the time it was called the Large Space Telescope — was set upon. I can imagine that there may have been a convergence or confluence of the designs. The Hubble’s primary mirror is 2.4 meters [7.9 feet] in diameter and the spacecraft is 14 feet in diameter. Both vehicles (KH-9 and Hubble) would fit into the shuttle’s cargo bay lengthwise, the KH-9 being longer than Hubble [60 feet]; both would also fit on a Titan-class launch vehicle.”
HEXAGON is an amazing piece of Cold War era technology. It was equipped with two medium-format cameras that would sweep back and forth, imaging a swath some 370 nautical miles wide. Each HEXAGON satellite carried some 60 miles worth of film in 4 separate film buckets, which would detach from the craft when used and return to Earth, where they would be snagged mid-air by a capture craft. They were hardy little canisters too: one of them ended up on the bottom of the ocean but was retrieved by one of the navy’s Deep Submergence Vehicles. There were around 20 launches of the HEXAGON series of craft, with only a single failure towards the end of the program.
What really surprised me about HEXAGON though was the resolution it was able to achieve some 30+ years ago. HEXAGON’s resolution improved throughout its lifetime, with later missions achieving around 60cm, more than enough to make out people and capture very detailed images of, say, cars and other craft. For comparison GeoEye-1, which had the highest resolution camera of any commercial Earth-imaging craft at the time of its launch, is only just capable of 40cm per pixel (and that imagery is property of the US government). Taking that into consideration I’m left wondering what kind of imaging satellites the USA is using now, considering the DoD appears to be a couple of decades ahead of the commercial curve.
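Out of curiosity I ran the basic optics numbers, and they show that even a fairly modest mirror at a reconnaissance satellite’s low perigee could plausibly hit that kind of figure. The ~160 km altitude and the aperture sizes below are my own assumptions for the estimate, not declassified specs, and real-world limits like film grain and atmospheric blur would push the achievable resolution worse than this ideal:

```python
def diffraction_limited_gsd(aperture_m: float, altitude_m: float,
                            wavelength_m: float = 550e-9) -> float:
    """Best-case ground resolution (metres) for a nadir-pointing telescope:
    Rayleigh-criterion angular resolution (1.22 * lambda / D) projected
    straight down from the given altitude."""
    return 1.22 * wavelength_m / aperture_m * altitude_m

# Hypothetical apertures at an assumed ~160 km perigee altitude.
for aperture in (0.2, 0.5, 1.0):
    gsd_cm = diffraction_limited_gsd(aperture, 160e3) * 100
    print(f"{aperture:.1f} m aperture -> {gsd_cm:.0f} cm best-case resolution")
```

Even the 20cm aperture case comes in around the quoted 60cm figure, so the claimed resolution is well within the physics of the era; the engineering feat was doing it with film and getting the film back down.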
It’s always interesting when pieces of a larger puzzle like the Shuttle’s design start falling into place. Whilst it’s debatable whether or not HEXAGON (and its sister craft) were a direct influence on the Shuttle, there are enough coincidences to give the theory a bit of credence. I can see why the USA kept HEXAGON a secret for so long; that kind of capability would’ve been downright scary back in the 80s, and its reveal makes you wonder what they’re flying now. It’s stuff like this that keeps me obsessed with space and what we, as a species, are capable of.
No beating around the bush on this one, Steve Jobs has resigned:
To the Apple Board of Directors and the Apple Community:
I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.
I hereby resign as CEO of Apple. I would like to serve, if the Board sees fit, as Chairman of the Board, director and Apple employee.
As far as my successor goes, I strongly recommend that we execute our succession plan and name Tim Cook as CEO of Apple.
I believe Apple’s brightest and most innovative days are ahead of it. And I look forward to watching and contributing to its success in a new role.
I have made some of the best friends of my life at Apple, and I thank you all for the many years of being able to work alongside you.
The news shouldn’t come as a shock to anyone. Jobs has been dealing with health problems for many years now and he’s had to scale back his involvement with the company as a result. The appointment of Tim Cook as the new CEO shouldn’t come as a surprise either, as Cook has been acting as interim CEO during Jobs’ absences over the past few years. Jobs’ involvement in Apple won’t completely cease either if the board approves his appointment as chairman, which I doubt they’ll think twice about doing. The question on everyone’s lips is, of course, where Apple goes from here.
The stock market understandably reacted quite negatively, with Apple shares down a whopping 5.23% at the time of writing. The reasons behind this are many but primarily it comes down to the fact that Apple, for better or for worse, has built much of its image around its iconic CEO. Jobs has also had a strong influence on the design of new products; Cook, whilst more than capable of stepping up, has no such pedigree, being more of a traditional operations guy. Of course no idea exists in a vacuum and I’m sure the talented people at Apple will be more than capable of continuing to deliver winning products, just as they did with Jobs at the helm.
But will that be enough?
For the most part I’d say yes. Whilst the Jobs fan club might be one of the loudest and proudest out there, the vast majority of Apple users are just interested in the end product. Whilst they might lose Jobs’ vision for product design (although even that’s debatable since he’s still on the board), Apple has enough momentum with its current line of products to carry it over any rough patches whilst it finds its feet in a post-Jobs world. The stock market’s reaction is no indicator of consumer confidence in Apple, and I’m sure only a minority of people have decided to stop buying Apple products now that Jobs isn’t at the helm.
Apple’s current success is undeniably down to Jobs’ influence, and his absence will prove to be a challenge for Apple to overcome. I highly doubt that Apple will suffer much because of this (the share price really only affects the traders and speculators), with a year or two of products in the pipeline that Jobs would have presided over. The question is whether their new CEO, or any public face of Apple, will be able to cultivate an image on the same level as Jobs did.
The current way of accessing space isn’t sustainable if we want to make it as a space-faring species. Whilst the methods we use today are proven and extremely reliable, they are amongst the most inefficient ways of lifting payload into orbit, requiring craft that are orders of magnitude larger than the precious cargo they carry. Unfortunately the alternatives haven’t been too forthcoming, due in part to nuclear technologies being extremely taboo and the others still being highly theoretical. Still, even highly theoretical ideas can have a lot of merit, especially if they have smaller aspects that can be tested and verified independently, giving the overall theory some legs to stand on.
I’ve talked before about the idea of creating a craft that uses only a single stage to orbit (SSTO), in essence a craft that has only one complete stage and conceivably makes extensive use of traditional aerodynamic principles to do away with a lot of the weight that conventional rockets have. My proposal relied on two tested technologies, the scramjet and aerospike engine, that would form the basis of a craft that would be the Model T equivalent for space travel; in essence opening up space access to anyone who wanted it. In all honesty such a craft seeing reality is a long way off but that doesn’t mean people aren’t investigating the idea of building a SSTO craft using different technologies.
One such company is Reaction Engines, a name I was only marginally familiar with before. They’ve got a proposal for an SSTO craft called Skylon that uses a very interesting engine design combining an air-breathing jet engine with a traditional rocket motor. The design recently passed its first technical review with flying colours and could see prototypes built within the decade:
They want the next phase of development to include a ground demonstration of its key innovation – its Sabre engine.
This power unit is designed to breathe oxygen from the air in the early phases of flight – just like jet engines – before switching to full rocket mode as the Skylon vehicle climbs out of the atmosphere.
It is the spaceplane’s “single-stage-to-orbit” operation and its re-usability that makes Skylon such an enticing prospect and one that could substantially reduce the cost of space activity, say its proponents.
The engine they’re proposing, called Sabre, has an extremely interesting design. At lower speeds it functions much like a normal jet engine, however as speeds approach Mach 5, the point at which my hand-waving design would switch to a scramjet, it continues to operate in much the same fashion. They do, however, employ a very exotic cooling system so the engine doesn’t melt in the 1000+ degree heat blasting its components, and once Skylon is out of the atmosphere it switches to a normal rocket engine to finish the job.
The issue I see, one that faces nearly all SSTO designs, is the rule of 6 for getting to orbit. The rule simply states that at Mach 6 at 60,000 feet you have approximately 6% of the total energy required to make it successfully to orbit. Skylon’s engines operate in jet mode all the way up to Mach 5 and an altitude of 85,000 feet, which is no small feat in itself, but it’s still a far cry from the total energy required. It is true though that the first stages of any rocket are the most inefficient, and eliminating them by using the atmosphere for both oxidiser and thrust could prove to be a real boon for delivering payloads into orbit. Whether this will be practical with Skylon and the Sabre engine remains to be seen, but there are tests scheduled for the not too distant future.
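The rule of 6 is easy to sanity-check with a quick specific-energy estimate. The round figures below (295 m/s for the speed of sound at that altitude, a 7,800 m/s orbital velocity at a 200 km reference orbit, constant g) are my own assumptions for the estimate:

```python
# Quick check of the "rule of 6": specific mechanical energy (per kg of
# craft) at Mach 6 / 60,000 ft versus a low Earth orbit. Assumptions:
# speed of sound ~295 m/s at altitude, LEO as 7,800 m/s at 200 km.
g = 9.81  # m/s^2, treated as constant over these altitudes

def specific_energy(speed_ms: float, altitude_m: float) -> float:
    """Kinetic plus potential energy per kilogram, in J/kg."""
    return 0.5 * speed_ms ** 2 + g * altitude_m

mach6 = specific_energy(6 * 295, 60_000 * 0.3048)   # ~1.75 MJ/kg
leo = specific_energy(7_800, 200_000)               # ~32.4 MJ/kg

print(f"Fraction of orbital energy at Mach 6 / 60,000 ft: {mach6 / leo:.1%}")
```

That lands at around 5-6%, right where the rule of thumb says it should, and it makes plain just how much of the journey remains after the air-breathing phase ends.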
Walking through unknown territory like this is always fraught with risk, so it’s no wonder the team at Reaction Engines has been met with such skepticism over their idea. Personally I’m still on the fence, as their technology stack is still mostly unproven, but I applaud their vision for wanting to build the first SSTO craft. I’d love to see Skylon making trips to the International Space Station, effectively replacing the Shuttle and extending the ISS’ lifetime, but until we see some more proof that their concept works I’m going to remain skeptical. It won’t take much to make me into a believer, though.
I’ve steered clear of saying anything about the iPhone 4 antenna issue that’s been making the rounds for the past month or so, mostly because I believe it’s almost a complete non-story. It seems pretty obvious that they made the choice to put the antenna on the outside for aesthetic reasons (although there aren’t a whole lot of other places it could have gone, really) and unfortunately the kinds of testing done wouldn’t pick up on this issue. Still, there seem to be as many people ready to leap on Apple for any issue as there are lining up to buy their products, and those two groups have never had as big a cross-section as they do today. The problem I have, however, is not so much whether or not there is a problem; it’s that a stream of FUD has begun to come out of various media outlets and PR firms, confusing the issue at hand rather than solving the problem outright.
For those of you not in the know, the iPhone 4 has its antennas laid bare to the world in the form of the metal bands that wrap around the outside of the handset. Due to the size constraints of the handset there’s really nowhere else to put them: the handset is quite thin and the additional electronics Apple packed into it don’t leave any room for a traditional internal antenna. Like most modern handsets it actually has 3 separate antennas, one used for Bluetooth/wireless/GPS and the other two for 2G/3G cellular communications. The cellular and other antennas are kept separate because each antenna is tuned to a specific range of frequencies, and two cellular antennas are used to improve reception. Realistically the only difference between the iPhone 4’s antennas and any other phone’s is the fact that you can see and touch them, and that’s where the problems start to arise.
You see, nearly every phone on the market today has its antennas on the inside of the phone, usually at the bottom of the handset to reduce radiation levels. They’re put inside the handset to make sure nothing can interfere with them directly, like, say, the keys in your pocket or your hand. The iPhone 4’s antennas are completely exposed to the world, and those sleek bands of metal are electrically conductive. Your hand is also a decent conductor, and when it comes into contact with the antenna you actually form part of a circuit with your phone. This wouldn’t be too much of a problem on its own, since electricity takes the path of least resistance (and your hand has a far higher resistance than the metal), but the fatal flaw in Apple’s design is the gap that gets bridged when the phone is held in the left hand.
When you bridge this gap you complete a circuit between the phone’s two cellular antennas. This detunes the antennas, significantly reducing their performance and the amount of usable signal available to the phone. This is why the problem can be replicated both by holding the phone normally and by simply bridging the gap between the two antennas with a finger. The solution is quite simple: the antennas just need to be isolated from the conductive surface of your hand, which is why the bumper cases were so effective at solving the problem.
Jobs has taken the unfortunate route of saying that all phones suffer from this issue, and that’s just not the case.
Now, before any of you go ahead and link me videos of it happening on other handsets, let me explain why that’s not the issue affecting the iPhone 4. All phones will suffer signal attenuation when you put your hand over the top of their antenna. That’s pure physics at work, since the signal has to pass through your hand, which is actually quite good at absorbing radiation. It follows that you could “death grip” any phone by just finding where its antenna is and covering that spot up. Hell, check any phone’s manual and it’ll probably show you where the antenna is and tell you not to cover it up.
However that’s a different problem to the antenna being detuned by you touching it. When your signal drops due to you holding your phone, that’s not you detuning the antenna; that’s just the signal being dampened by the barrier of your hand. You can’t detune an antenna you aren’t able to make electrical contact with, and that’s where the videos Jobs showed at the press conference were misleading. The iPhone 4’s problem isn’t one of attenuation by the human hand; it’s one of the antenna being thrown out of whack electrically.
There’s no doubt that Apple handled this badly, and in classic style they’ve attempted to muddy the issue whilst making themselves look like the good guys. Granted, giving every iPhone 4 owner a free bumper is a good move and I applaud them for it. However their handling of the issue, trying to drag everyone else down with them and spreading FUD, hasn’t done them any favours in my book, nor in anyone else’s as far as I can tell. Hopefully I’ve cleared things up so you understand the difference between the death grip on the iPhone 4 and on any other handset out there, rather than the crap I’ve seen spouted over this issue.
Whilst I’m no stranger to the business world I’m still a new player when it comes to developing usable products for a wide audience. My years of training as an engineer and my short stint as a project manager gave me a decent amount of insight into designing products and services for a customer who’s shovelling requirements at you, but when it comes to designing something to requirements that are somewhat undefined, well, you can imagine I initially found myself dumbfounded. It’s one thing to have an idea in your head; bringing it kicking and screaming into the real world is another.
For the most part I began with an initial concept and started to flesh it out as best I could. The original idea behind Geon was (in my head) called “What’s Going On?” whereby you could plonk down an area on a map and send a question to everyone running the application in the area. The people in the area then could, if they so wanted, respond back via their phone client with some text, image or video. The main idea was to get people communicating and secondary to that would be supplemental information from other sources. After socializing the idea a bit people seemed to think it would be an interesting service (although most declined to make serious comment until after they saw it in action) and the closest competitors looked to be throw-away applications that probably took the developers a couple weeks to slap together. Things were looking good, so I started hacking away.
Behold the horror that was my first attempt, something I almost foolishly went ahead and tried to promote amongst my favourite tech sites. The first iteration was a horrible compilation of ASP.NET and various client libraries I managed to scrounge from all over the Internet. For the most part it worked as intended, being able to pick up information from various sources depending on your location. The problem was it was ugly, unintuitive and relied rather heavily on my poor little web server to do all the heavy lifting. Additionally, after walking a blogger friend of mine through using it he immediately suggested a couple of features that had just never crossed my mind and, upon consideration, would be absolutely essential in high information density areas. They were so good that even the latest incarnation of Geon incorporates his suggestions.
Looking back over all my experience in designing solutions I realised that I had always been spoiled by having the problem handed to me on a silver platter. When you’re working for a client it’s pretty easy to figure out what they need when they’re telling you at every turn what they want. Sure it might be a hassle to make sure that they properly define their requirements but at least you have a definitive information source on what will constitute a successful outcome. When you’re working to develop something where you’re not quite sure who your client will be, the game changes, and you find yourself looking around for answers to questions that might never have been asked before. Right now I find the majority of my answers through other people’s web services, hoping that emulating some of their characteristics will bring along with it some of their success.
At the core of all this is the software development philosophy of release early, release often. Whilst my product probably isn’t ready for prime time, the more I show it to people who will (hopefully) end up as my users the more insight I get into what I should and shouldn’t be doing with it. Even better was discussing it with some of my proper software engineering friends, who suggested different ways of doing things that not only simplified my code (to the order of hundreds of lines, thanks Brett!) but also opened up services that until now had seemed baffling in the way they returned their data. I guess the lesson to take away from this is that the more you collaborate with others the better your end product will be, which is hard for someone who’s as protective of his creations as I am.
I know I harp on a lot about Geon on this blog (and I’m sure you guys are sick of hearing about it!) but it has been the source of many eye opening moments and it’s all too easy to get caught up in the excitement of sharing something I created with the world. I was never that creative (I can’t draw, I’m not a very sporty person and my music creation skills have been in hiding since my debut song Chad Rock (that’s an anagram of the real title, FYI) earned me unwanted infamy in my group of friends) and apart from this blog I’ve never really had any other creative outlets. I guess I just want to let the wider world know how exciting it is to create something, even if I sound like a hyperactive 2 year old with a new toy.
Plus the more I talk about it the more likely I am to work on it, since I feel guilty for being all talk and no action.
Take a good look at any big IT system and you can usually trace its roots to one of two places. The first is the one that all of us like to work with: the greenfield project. In essence this is brand new work that has been born out of a requirement that didn’t exist before, or a complete rethink of a current implementation. Talk to any consultant who’s trying to sell you some new tech and you can be guaranteed that they’ll be looking to sell you a greenfield solution, mostly because it’s cheaper and much easier for them to implement.
Sadly, and especially for those of us employed by the government, the majority of the projects that us IT guys will work on will never be greenfield situations and will usually be encumbered by some form of legacy system. This poses greater risks and constraints on what work can be done and ultimately you’re probably working to fix problems that someone else created. It’s been rare that I’ve been given the privilege of working on a project that was aimed at fixing my own mistakes, but I could put that down to my insatiable appetite for job hopping.
My own projects are a different beast as they are all my own work and thus all my own mistakes. Take for instance my initial foray into the world of web programming, Geon. Initially I decided that I’d code the whole thing in ASP.NET, mostly because I could do it in C# (something I’m very familiar with) and there appeared to be a whole lot of resources available for doing the things I needed to do. For the most part that worked quite well and I was able to get the majority of the components up and running within a few weeks. Sure some of the subtleties of the design I had in my head didn’t quite work but for the most part I was able to get what I needed done, and even launch a few improvements along the way.
The transition to Silverlight was in fact a greenfield approach to the application. The initial iteration of Geon in Silverlight was, for the most part, a like-for-like system built upon a completely new code base. Whilst they share a common language, the frameworks available and the UI design are wildly different. Still, with a little effort I was able to replicate Geon in Silverlight in less than a weekend and everything seemed right with the world.
Then one day I had a fit of inspiration about a new layout for Geon. I quickly fired up Visio, started playing around with visual elements and cobbled together a better design. Everything seemed to be falling into place and I could see how it would be so much easier with this new design. Unfortunately this meant that the current code I had written for Geon in Silverlight was effectively unusable, as the visual elements drove the underlying objects. The internal logic of some parts remained though, and the new design took considerably less time to develop.
You might be noticing a couple of patterns here. The first (I’m going to start with the good here) is that for the most part a lot of what I’ve created is reusable, which is a classic example of modular programming at work. There was a bit of massaging between ASP.NET and Silverlight but, thanks to Microsoft’s libraries, this was fairly minimal. The second is that I’m getting into a habit of starting afresh each time I think of a new and better way of doing something, despite the amount of work that entails.
I put this down to a form of analysis paralysis. In essence, every time I’ve taken a long hard look at my code after a break from it, the first thing I notice is how difficult it will be to get everything working just the way I have it in my head if I want to keep the current code. It stems from the way I work on problems: by intensely focusing on a single problem until I have the solution. Whilst this usually ends up with an adequate solution to the problem at hand, I’ve often found myself spending a good 10 minutes on a function figuring out how exactly it does something. Repeat this for every function and in the end it becomes easier to just start over instead of trying to rewrite everything so it fits together perfectly.
This all came to a head when I started looking at the layout of Geon and realised that it had some inherent problems with viewing large amounts of information. Subsequently I’ve drawn up yet another design that is, you guessed it, almost wholly incompatible with the way I’m doing things now. I’ve since dedicated my weekend to developing the design and seeing how it works out, but as you can imagine, when I’m looking at dropping the code base for the third time I start to question whether I’m really making any progress. Or maybe I’m just avoiding coding the real meat of Geon because it’s, you know, hard.
The good news is that the project manager in me isn’t going to be happy with feature creep and deadlines falling by the wayside so I firmly believe that this iteration of Geon will be the last major UI redesign before its final release on the world. This time I’ve made sure to include those “little things” like a user control panel which were strangely absent from my last 2 designs and hopefully I’ll achieve my goal of making the information much more visible than it currently is.
It really doesn’t help that my to-play list of games is getting longer every day either.
Conjure up in your mind a picture of the humble nail cutter (or, if you have one handy, grab it!). Not only is this device a marvel of modern technology, it also proves to be a useful example of what good engineering practices should be. Can you figure them out? The same question was posed to one of my classes when I was still in university, and none of us could come up with an answer good enough to satisfy our lecturer. If you take a step back and look at a nail cutter you notice something: there’s not a lot to them.
The majority of nail cutters are made out of a grand total of about 6 parts (lever, top cutter, bottom cutter, file, front pin and rear rivet). Whilst the whole thing might appear simple on the surface it is indeed a feat of complex engineering: each of the pieces serves more than one function in order to achieve the end result. Our lecturer at the time had us try to imagine a nail cutter whose design let each piece perform only a single function. The resulting contraption was a monstrosity of dozens of parts and, if created, would have been more than double the size of a conventional nail cutter. This exercise was done to teach us the importance of modularity, and when it’s gone too far.
One of the very first problem solving methods you’re taught as an engineer is to take what looks like a large problem and divide it into smaller and smaller sections until it becomes manageable. We were first taught this in reverse, with our first assignments usually serving as a basis for the rest of the semester. However, early in our second year we were given what appeared to be almost impossible projects, only to have small clues as to their solution taught to us in the weeks ahead. The problem is that when you take the modular design methodology too far you end up with innumerable small components, which changes your problem into one of integration. The nail clipper example showed us that you shouldn’t modularize a problem beyond what will allow you to solve it, lest you introduce complexity rather than remove it.
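The same trade-off shows up directly in code. Here’s a toy sketch of the nail-cutter lesson (in Python, purely my own illustration): the first version splits a trivial task into single-purpose fragments, so all the real effort goes into wiring them together, while the second follows the natural seams of the problem instead.

```python
# Over-modularised: every piece performs exactly one function, like the
# imaginary dozens-of-parts nail cutter. The work shifts to integration.
def strip_text(s): return s.strip()
def lower_text(s): return s.lower()
def split_words(s): return s.split()
def drop_empty(words): return [w for w in words if w]
def count_items(items): return len(items)

def word_count_fragmented(s):
    # Five tiny modules glued together just to count words.
    return count_items(drop_empty(split_words(lower_text(strip_text(s)))))

# Right-sized: one module per *problem*, not per operation, like the
# real six-part nail cutter where each piece pulls double duty.
def word_count(s):
    """Count whitespace-separated words, ignoring case and padding."""
    return len(s.lower().split())

assert word_count_fragmented("  The Quick Brown Fox  ") == 4
assert word_count("  The Quick Brown Fox  ") == 4
```

Both give the same answer; the difference is that one of them makes you hold five function contracts in your head to understand a one-line idea.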
You can see this methodology applied almost everywhere, for better and for worse. It’s one of those problem solving skills that doesn’t get taught in school and really it’s one skill I can’t imagine myself being without. If you take the time to analyze any problem you might have and break it down into its basic components, nearly anything becomes a matter of time rather than brainpower.
Now, go forth and modularize my minions!