Ask any computer science graduate about the first programmable computer and the answer you’ll likely receive is the Difference Engine, a conceptual design by Charles Babbage. Whilst the design wasn’t entirely new (that honour goes to J. H. Müller, who wrote about the idea some 36 years earlier) he was the first to obtain funding to create such a device, although he never managed to get it working despite spending the equivalent of $350,000 in government money on the attempt. Still, modern day attempts at creating the engine with the tolerances of the time period have shown that such a device would have worked had he completed it.
But Babbage’s device wasn’t created in a vacuum, it built on the wealth of mechanical engineering knowledge from the decades that preceded him. Whilst there was nothing quite as elaborate as his Analytical Engine there were some marvellous pieces of automata, ones that are almost worthy of the title of programmable computer:
The fact that this was built over 240 years ago says a lot about the ingenuity that’s contained within it. Indeed the fact that you’re able to code your own message into The Writer, using the set of blocks at the back, is what elevates it above other machines of the time. Sure there were many other automata that were programmable in some fashion, usually by changing a drum, but this one allows configuration on a scale that they simply could not achieve. Probably the most impressive thing about it is that it still works today, something which few of today’s machines will be able to claim in 240 years’ time.
Whilst a machine of this nature might not be able to lay claim to the title of first programmable computer you can definitely see the similarities between it and its more complex cousins that came decades later. If anything it’s a testament to the additive nature of technological developments, each one of them building upon the foundations of those that came before it.
There are a lot of things in this world that I think I have a sound understanding of that, usually after a Wikipedia binge or YouTube bender, turn out to be just not in line with reality. These usually aren’t fundamental things (although my recent dive into corporal discipline of children was something of an eye opener) but more and more I find myself astonished at just how wrong my intuition can be. The most recent example is the simple petrol pump and the mechanism that stops the flow when your tank is almost full.
So in my engineer brain I figured that there was some kind of sensor embedded in the end of the nozzle and, upon fuel reaching the outside of the nozzle, the pump would be alerted, stopping the flow. Of course I often wondered how they managed to detect fuel on the outside of the nozzle whilst ignoring the inside, but I figured that there were people much smarter than me working on that problem and it was a simple matter of engineering. I was right about that part, but I never expected a fully mechanical solution to it, especially one as elegant as the one they show in the video.
It really is true what they say about what happens when you assume something
This requires no introduction, just watch:
As a performance this is pretty amazing as the extensive use of optical illusions to generate a feeling of depth where there is none surpasses anything that I’ve seen before. It gets even more impressive when you find out that all of it was done in camera, i.e. none of the effects you see have been edited in afterwards. Initially I was a little sceptical of that, I mean this kind of stuff is child’s play to anyone with Blender and some 3D tracking software, but once I saw the robotic arms in the background I immediately understood how everything fit together and it’s incredibly impressive.
There are two key components at work here, the first of which is the IRIS robotic arm from Bot and Dolly. They’re essentially scaled-down industrial robots with several pivot points allowing them to move freely in 3D space. These are what are holding the two white panels where most of the magic happens and you can see that they’re quite agile even with their considerable bulk. The real trick, though, is that the camera is also mounted on one of them, which is what allows the next piece of technology to really shine.
As you can probably guess there are two projectors (at least, there could be more) which are responsible for all the visual imagery you see: one behind the camera and one pointing down onto the floor. Now what makes all of these crazy images possible is the fact that the IRIS arms can report their exact location in three dimensions, allowing the projectors to then display images with the required perspective to generate the illusions. It’s similar to the WiiMote head tracking application that came out a while back as that demo makes use of the same principles to generate the illusion of depth.
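The core of that perspective trick can be sketched in a few lines. This is my own simplified model, not anything from Bot and Dolly’s actual pipeline: treat one of the white panels as the z = 0 plane, then for a virtual point that should appear “behind” the panel, intersect the ray from the tracked camera through that point with the panel to find where the projector must draw it:

```python
import numpy as np

def panel_point(camera: np.ndarray, virtual: np.ndarray) -> np.ndarray:
    """Where on the panel (modelled as the z = 0 plane) a virtual point
    must be drawn so that, from the tracked camera's position, it appears
    to sit at its intended 3D location behind the panel.

    The ray from the camera to the virtual point is intersected with the
    panel plane; t is the fraction of the ray at which z reaches 0.
    """
    t = camera[2] / (camera[2] - virtual[2])
    return camera + t * (virtual - camera)

camera = np.array([0.0, 0.0, 3.0])    # camera 3 m in front of the panel
virtual = np.array([1.0, 0.5, -2.0])  # point meant to look 2 m "behind" it
spot = panel_point(camera, virtual)   # lands at (0.6, 0.3) on the panel
```

Move the camera and the spot has to move too, which is exactly why the arms reporting their pose in real time is what makes the whole illusion hold together.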
Another cool application of robots like this is introducing motion into high speed camera shots. Traditionally high speed video usually remains static as moving the camera fast enough to get any kind of good perspective in them is nigh on impossible. This demo reel from THE MARMALADE shows a very similar kind of robot that they use to do high speed video that has significant amounts of motion in it. The result is so foreign that it feels like it’s in the bottom of the uncanny valley for me but it’s still very impressive.
With CGI being par for the course these days you can’t be blamed for thinking that anything you see is fake. I think that’s why effects that are achieved without the use of computer trickery are so impressive, much in the same way as games that forego modern graphics but still manage to create an intriguing experience. Probably one of the coolest effects I’ve seen recently is the use of sound waves at a frequency very close to the frame rate of the camera being used, which ends up producing some pretty weird and wonderful effects.
Below is the latest one I’ve come across, and it’s pretty awesome:
As the video alludes to, the effect would appear to stem from the rolling shutter that CMOS based cameras use to create images. What’s happening is that the image is read off the sensor line by line and then reconstructed into a full image. However, because of the way the sensor is read, the scene can change during the readout, which gives rise to all sorts of weird and wonderful effects. In this particular video it has the effect of making the speaker cone appear to have a wave travelling through it, rather than simply moving in and out like the creator expected. This has since been confirmed in other videos as rotating the camera shows the effect tracking the camera’s point of view.
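That line-by-line readout is easy to simulate. Here’s a minimal sketch, with parameters I’ve made up for illustration (a 440 Hz tone, a 1080-row sensor that scans a frame in 1/60 s): the whole cone shares a single displacement at any instant, but because each row samples it slightly later than the last, the finished frame shows a wave frozen across the cone:

```python
import numpy as np

# Assumed, illustrative parameters - not taken from the actual video.
FREQ_HZ = 440.0          # tone driving the speaker
AMPLITUDE_MM = 2.0       # peak cone displacement
ROWS = 1080              # sensor rows, read top to bottom
READOUT_S = 1.0 / 60.0   # time taken to scan one whole frame

def captured_displacement(frame_start_s: float) -> np.ndarray:
    """Cone displacement recorded by each row of one frame.

    The cone really moves uniformly, but row r is sampled at a time
    offset of r * (READOUT_S / ROWS), so the captured frame shows a
    sinusoid varying down the image - the apparent travelling wave.
    """
    row_times = frame_start_s + np.arange(ROWS) * (READOUT_S / ROWS)
    return AMPLITUDE_MM * np.sin(2 * np.pi * FREQ_HZ * row_times)

frame = captured_displacement(0.0)
# Apparent wave cycles visible in one frame = tone frequency * readout time.
cycles_in_frame = FREQ_HZ * READOUT_S
```

With these numbers you’d see a bit over seven full “waves” painted across the cone in a single frame, even though the cone itself is just pulsing in and out.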
Other interesting effects you can get include “freezing” the motion of water using a similar technique. If you fool around with the frequencies slightly you can also get all sorts of other weird behaviour, like water appearing to defy gravity. These examples are all based on sound waves, but anything with a periodicity to it will let you create some really cool effects with cameras that use a rolling shutter.
As most readers are aware I’m an incredibly amateur photographer, having dabbled in it on and off for the past 5 years but only really starting to take it seriously towards the end of last year. I’m still very much in the early stages of my understanding as, whilst I can produce some pictures that I (and others) like, my hit rate still feels incredibly low, especially when I set out to create a very specific image. A lot of that comes from my still nascent understanding of how to light subjects properly and how the direction and intensity of the light change the resulting image.
Now whilst the following video isn’t exactly the greatest introduction on how you should go about lighting your subject (in this case a model’s face) it does showcase just how dramatically you can change the resulting image simply by moving the light source:
Showing this to my wife, she was adamant that they were splicing together video of different models as the changes are quite dramatic. It is the same person, however: if you look at the eyes you can see the light source rotating at a rather impressive clip, which is what gives rise to the dramatic changes in the shadows. Pausing at different sections also makes it quite clear what the impact of the direction of light is and how it’s reflected in the final image.
I wonder what the effect would be if instead of moving the light they used multiple sources then just cycled through them. Hmmmmm…….
On the recommendation of a friend I recently watched a documentary called Side by Side which details the history of the primary technology behind cinema: the cameras. It starts off by giving you an introduction to the traditional photographic methods that were used to create films in the past and then goes on to detail the rise of digital in the same space. Being something of a photography buff as well as a technological geek the topic wasn’t unfamiliar to me, but it was highly interesting to see what people in the industry were thinking about the biggest change to hit it in almost a century.
Like much of my generation I grew up digitally, with the vast majority of my life spent alongside computers and other non-analogue equipment. I was familiar with film as my father was something of a photographer (I believe his camera of choice was a Pentax K1000 which he still has, along with his Canon 60D) and my parents gave me my own little camera to experiment with. It wasn’t until a good decade and a half later that I’d find myself in possession of my first DSLR, and another few years after that before I’d find some actual passion for it. What I’m getting at here is that I’m inherently biased towards digital since it’s where I found my feet and it’s my preferred tool for capturing images.
One of the arguments that I’ve often heard levelled at digital formats, both in the form of images and your general everyday data, is that there’s no good way to archive them for future generations to view. Film and paper, the traditional means with which we’ve stored information for centuries, would appear to archive quite well given the amount of knowledge in those formats that has stood the test of time. Ignoring for the moment that digital representations of data are still something of a nascent technology by comparison, the question of how we archive them has come up time and time again and everyone seems to be under the impression that it can’t be done.
This just isn’t the case.
Just before I was set to graduate from university I had been snooping around for a better job after my jump to a developer hadn’t worked out as I planned. As luck would have it I managed to land a job at the National Archives of Australia, a relatively small organisation tasked with the monumental effort of cataloguing all records of note that were produced in Australia. This encompassed all things from regular documents used in the course of government to things of cultural value like the air line tickets from when the Beatles visited Australia. Whilst they were primarily concerned with physical records (as shown by their tremendous halls filled with boxes) there was a small project within this organisation that was dedicated to the preservation of records that were born digital and were never to see the physical world.
I can’t take much credit for the work that they did there, I was merely a caretaker of the infrastructure that was installed long before I arrived, but I can tell you about the work they were doing. The project team, consisting mostly of developers with just 2 IT admins (including myself), was dedicated to preserving digital files in the same way you would a paper record. At the time a lot of people were still printing them off and then archiving them that way, however it became clear that this process wasn’t going to be sustainable, especially considering that the NAA had only catalogued about 10% of their entire collection when I was there (that’s right, they didn’t know what 90% of the stuff they had contained). Thankfully many of the ideas used in the physical realm translated well to the digital one and thus XENA was born.
XENA is an open source project headed by the team at NAA that can take everyday files and convert them into an archival format. This format contains not only the content but also the “essence” of the document, i.e. its presentation, layout and any quirks that make that document that document. The included viewer is then able to reconstruct the original document using the data contained within the file and, since the project is open source, should the NAA cease development the data will still be accessible to everyone who used the XENA program. The released version does not currently support video but I can tell you that they were working on it while I was there; the needs of archiving digital documents were simply the more pressing requirement at the time.
Ah ha, I’ll hear some film advocates say, but what about the medium you store them on? Surely there’s no platform that can guarantee that the data will still be readable in 20 years, heck even 10 I’ll bet! You might think this, and should you have bought any of the first generation of CD-Rs I wouldn’t fault you for it, but we have many ways of storing data for long term archival purposes. Tapes are by far the most popular (and stand the test of time quite well) but for truly archival quality data storage that exists today nothing beats magneto-optical discs, which can have lives measured in centuries. Of course we could always dive into the world of cutting edge science for things like a sapphire etched platinum disc that might be capable of storing data for up to 10 million years, but I think I’ve already hammered home the point enough.
There’s no denying that there are challenges to be overcome with the archival of digital data as the methods we developed for traditional media only serve as a pointer in the right direction. Indeed attempting to apply them directly to the digital world has often had disastrous results, like the first reel of magnetic tape brought to the NAA, which was inadvertently baked in an oven (a process used on paper to kill microbes before archival), destroying the data forever. This isn’t to say we don’t have anything, nor that we aren’t working on it, and as technology improves so will the methods available for archiving digital data. It’s simply a matter of time until digital becomes as durable as its analogue counterpart and, dare I say it, not long before it surpasses it.
Want to feel really insignificant for a bit?
I don’t know what it is but things like the galaxy IC1101, VY Canis Majoris and all other heavenly bodies that are just beyond anything that I’m capable of imagining captivate me completely. I think it’s probably due to the possibilities that arise from such scale. Just think about it, if one planet in one lonely solar system was able to produce a species like us what kind of life could have formed in these other places. Could it even happen? Would we be able to recognise it if we saw it? The possibilities are nearly endless and that, for me at least, is wildly fascinating.
It’s that desire to find out what’s out there that fuels my passion for transhumanist ideals. Whilst many will argue that ageing and death are a natural part of life that should not be circumvented I instead ask why you want to limit your experience to one life time, especially when the universe is so vast as to provide nearly limitless opportunities for those who wish to explore it.
Some find that incomprehensible scale intimidating, I find it invigorating.
Things like this never fail to bring me to tears:
It’s not the most original video on the planet (or off, as the case might be) but it’s probably one of the most memorable ones of these edge of space type deals. The train’s face is CGI but the rest of it is completely real, done in a process that can be replicated on the cheap if you know what you’re doing. There are however a couple nits that I like to pick about videos like these mostly around what people tend to classify as “space”.
The internationally defined standard for being in space and not in Earth’s atmosphere is 100 km above sea level, referred to as the Kármán line. The most exotic of helium balloons will only manage to make it about halfway to that point before bursting and falling back to earth. Whilst the atmosphere at those heights wouldn’t support life for any length of time and you can clearly see the curvature of the Earth, it’s not space unless you’re past that point. Even saying you’re at the edge of space is a bit of a stretch, but I’ll usually let that slide.
Despite all that I still love videos like this as they really put the whole world in perspective. That feeling has a name too, the overview effect, which many astronauts have reported feeling upon seeing the Earth from space or on the lunar surface. It’s my hope (and running bet with a friend) that I’ll one day see the earth from that perspective too.
I usually reserve these kinds of things for a quick tweet or Facebook post but I figured it was time I actually explained the creation of these particular videos. Shown below for your viewing pleasure is yet another Curiosity descent video that makes for some incredible watching:
For starters, the first thing I’ll let you in on is that all the sound you hear in this video is 100% fake as Curiosity does not have a microphone on board. That may seem strange, I mean what camera that can take video doesn’t have one, but they’ve launched craft to Mars with microphones before (the Mars Polar Lander was one, although it was tragically lost, with the Phoenix Lander being one that actually made it) and the recordings made back then weren’t particularly interesting. Most of the noise that they recorded was akin to static and really didn’t have much use scientifically, so future Mars craft like Curiosity don’t carry them, freeing up the payload space for more experiments. Additionally the actual sound would probably be a lot harsher (ever heard a microphone in high wind?) as at this stage Curiosity was rocketing towards Mars at a pretty decent rate.
The original video, shown here, is based on the images from the MARDI camera that’s on the bottom of the rover specifically for this purpose. Now I’ve heard differing reports as to what the actual frame rate was: the original video says it’s somewhere on the order of 2 FPS (297 images over 150 seconds) but most are quoted as saying it’s 4 FPS. The imager itself is capable of up to 10 FPS but I don’t believe it was running that fast for this particular video. How then, you might be wondering, do they manage to get something like 20 FPS like the video above does? Well, the video above was most likely created from the original using something called video interpolation (or inbetweening, as it’s usually referred to).
In essence the additional frames are generated from the frames on either side of them, with the algorithms essentially guessing what’s going to come next. For the MARDI images this works quite well as the amount of change between frames is quite low and thus the interpolated frames look quite good. Most of the better ones of these also involve a lot of hand work to smooth some things out (like the heat shield’s falling motion). If there’s a lot of action between frames you tend to get smudging, which you can actually see hints of in the video (look at the landscape shifting about as it gets closer). It works on any kind of video too and a lot of enterprising YouTubers use it to get that slow motion effect without having to spend untold thousands on high speed video cameras.
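The simplest form of inbetweening is just a cross-fade between neighbouring frames. Real tools go further, estimating per-pixel motion (optical flow) and warping along it, but this zero-motion blend is a fair sketch of the idea, and it’s also where the smudging on fast-moving content comes from:

```python
import numpy as np

def inbetween(frame_a: np.ndarray, frame_b: np.ndarray, n: int) -> list:
    """Generate n intermediate frames by linear cross-fading.

    Each new frame is a weighted average of the two originals; this is
    the zero-motion special case of proper flow-based interpolation.
    """
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)                       # 0 < t < 1 between the two frames
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Going from ~4 FPS to ~20 FPS needs four new frames per original pair.
a = np.zeros((4, 4))   # stand-ins for two consecutive MARDI frames
b = np.ones((4, 4))
mids = inbetween(a, b, 4)
```

When the two frames differ only slightly, as the MARDI ones do, the blends are nearly indistinguishable from real frames; when something moves a lot between them, the average of two different positions is the ghostly smudge you see in the video.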
I find the videos interesting both because of what they are (technical achievements in both their creation and interpolation) and what they represent to us as a species. The response to the Curiosity videos has been nothing short of amazing and it makes me so happy to see so many people inspired by it. It’s things like this that spur on the next generation to become the kinds of people capable of making things like this, and it never fails to impress me.
This video is awesome not just because they built a water slide that lets you do a loop the loop but because it’s a very simple demonstration of the centripetal forces at play. You’ll notice that there’s quite a bit of lead up to the actual loop itself, a requirement so that when you enter the loop you’re going fast enough to overcome the effects of gravity at the top. Too little and you’d only find yourself getting part way around the loop before tumbling down. Too much and you’d risk breaking the supporting structure, but you’d have to be going at quite a clip to accomplish that.
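A quick back-of-the-envelope check shows why the lead up needs to be so long. Assuming a frictionless slide and a made-up loop radius of 2 m (my guess, not a measured figure from the video): at the top of the loop gravity must not exceed the required centripetal force, and energy conservation then tells you how fast you must be going at the bottom and how high the ramp has to start:

```python
import math

g = 9.81        # gravitational acceleration, m/s^2
radius = 2.0    # assumed loop radius, m

# Minimum speed at the top of the loop: gravity alone provides the
# centripetal force, so v_top^2 = g * r.
v_top = math.sqrt(g * radius)

# Energy conservation down to the bottom of the loop (a height of 2r):
# v_bottom^2 = v_top^2 + 2 * g * (2 * radius) = 5 * g * radius
v_bottom = math.sqrt(v_top**2 + 4 * g * radius)

# Drop height of the lead-up ramp needed to reach that speed,
# which works out to 2.5x the loop radius regardless of g.
ramp_height = v_bottom**2 / (2 * g)
```

With a 2 m loop you’d need roughly a 5 m drop and about 10 m/s (36 km/h) at the bottom, and a real slide needs more still since water slides are anything but frictionless.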
If you want to see a good demo of the forces in action the Physics Classroom has a good post on it.