There has been no doubt in my mind for a long time that homeopathy is total bunk. For it to work as it’s supposed to, several laws of physics must be violated and our understanding of the human immune system thrown out the window. I have no issue with people self-prescribing these things; however, the fact that many practitioners advocate their remedies in favour of actual medicine is what draws my ire. Thankfully the Australian government has begun to show an intolerance for such charlatanry and recently commissioned a review of the research done on homeopathy. The results are, unsurprisingly, not in homeopathy’s favour: the review of the literature found it to be no more effective than a placebo for a total of 68 illnesses.
The study was conducted by the National Health and Medical Research Council, Australia’s peak funding body for medical research, which oversees some $700 million in funding per year. Their study drew on 57 systematic reviews covering some 176 individual studies, along with information submitted directly to the NHMRC during its public consultation phase. Studies were only included if they were well designed (judged by comparing them to international standards for conducting such trials) and placebo controlled. The results showed that, across all well designed and properly controlled studies, there was no evidence to suggest that homeopathy was any more effective than a placebo. Indeed the only positive results were found in studies of poor quality and design, which would likely have led to spurious conclusions not supported by the data. For the remaining studies there was simply not enough evidence to draw a conclusion one way or the other.
Whilst these results are unsurprising they do raise questions about the regulation of remedies like these. Australians spend some $10 million a year on them, a figure that continues to climb every year. With the body of evidence so strongly against them, it’s worth asking whether they should be sold at all. I think they get a pass since they have no real potential to cause harm in and of themselves; it’s the abstinence from proper medicine that they encourage which poses the real danger. So perhaps we need to regulate the practitioners rather than the remedies themselves.
It feels like beating a dead horse at this point but the fact that homeopathy is still around, and becoming more popular, shows that research like this is needed. I know it won’t convince everyone but hopefully those who are on the fence about it will be convinced that homeopathy is total bunk.
Wave interference is a relatively simple scientific concept that can be difficult to grasp at first. Many are introduced to the idea in high school or college physics, usually via something like the double slit experiment. Whilst this is a great demonstration of the wave properties of light, it’s not exactly obvious how the constructive and destructive interference actually works. Something like the following video, I feel, gives a far better visual impression of what wave interference and superposition do in the real world.
The really cool demonstration comes at about 55 seconds in, where they demonstrate a concentric wave singularity, or what they call “The Spike”. Basically they drive the waves in such a way that once they meet in the middle they all interfere with each other at just the right point. This results in the rapid formation of a cavity in the middle which is then slammed shut as the waves return to their peak. The resulting geyser flows upward for far longer than you’d expect it to, a great demonstration of the power of constructive interference with waves.
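The superposition behind “The Spike” can be sketched numerically. Below is a minimal illustration (the amplitude and wavelength values are arbitrary, not taken from the video) of how two identical waves add when they meet in phase versus half a cycle out of phase:

```python
import math

def wave(amplitude, wavelength, phase, x):
    """Displacement of a single sinusoidal wave at position x."""
    return amplitude * math.sin(2 * math.pi * x / wavelength + phase)

# Two identical waves meeting in phase: constructive interference.
in_phase = wave(1.0, 2.0, 0.0, 0.5) + wave(1.0, 2.0, 0.0, 0.5)

# The same waves half a cycle apart: destructive interference.
out_of_phase = wave(1.0, 2.0, 0.0, 0.5) + wave(1.0, 2.0, math.pi, 0.5)

print(in_phase)      # twice the single-wave amplitude
print(out_of_phase)  # effectively zero
```

In the tank, many such waves are timed so their crests all coincide at the centre at once, which is what launches the spike.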
FloWave itself was constructed to replicate currents and waves seen in the ocean. This allows companies and researchers to test out their technologies in a controlled environment before they get deployed offshore, potentially saving costly repairs and re-engineering. That means that it’s mostly used to test out how things respond to various kinds of waves and currents, rather than generating awesome wave spikes that shoot water several stories into the air. Still I’d love something like this on a smaller scale to do my own demonstrations of wave interference.
The researchers behind the Laser Interferometer Gravitational-Wave Observatory (LIGO) recently announced that they had directly observed gravitational waves. It’s an amazing achievement, one made all the more pertinent by the fact that it came 100 years after Einstein predicted them with his theory of general relativity. It was the last remaining piece of the theory yet to be observed and, with LIGO’s results, it’s finally complete. However this is far from the end of research into gravitational waves, and there are some incredibly ambitious missions planned, with one already on its way.
LISA Pathfinder, pictured above, was launched on December 3rd, 2015. Inside the craft are two small test masses sitting on opposite ends of the craft, 40cm apart. It arrived at its destination, a special place called Sun-Earth Lagrange point 1 (chosen because the gravitational pulls of the sun and earth balance out there), on the 22nd of January 2016. After it has been commissioned it will set those two masses free, allowing them to experience near perfect free fall. It will then attempt to measure the distance between them using the same kind of laser interferometry that the LIGO detector uses here on earth. It will also test various systems that account for other forces acting on both the craft and the test masses, as well as providing insight into the longevity such systems will have in space. It’s essentially a smaller version of LIGO in space, one that will be critical for further planned missions.
As its name implies LISA Pathfinder is the trailblazer for another, much more ambitious craft that’s scheduled to be launched in 2034. LISA Pathfinder should be able to provide evidence that the systems work as intended, although I wasn’t able to find any official source saying it will definitely provide direct observations of gravitational waves itself. Indeed LIGO had been running since 2002 and was unable to detect anything until the recent upgrade was completed in September 2015. However the data provided by those observations helped determine what level of sensitivity was required, and it’s likely that LISA Pathfinder will provide the same assurance for its successor craft, eLISA.
Comparatively LISA Pathfinder and eLISA are not even in the same ballpark. Where LISA Pathfinder has 2 small masses separated by 40cm, eLISA will have 3 distinct craft, each carrying a 2kg test mass and separated by 1 million kilometres. The principles behind them are the same, however, as they will precisely measure the distance between each other using laser interferometry. eLISA will be able to detect gravitational waves at a much lower frequency than its ground based peers, allowing us to see a much wider range of the events that generate them. For comparison, LIGO can only detect frequencies several orders of magnitude higher than what eLISA will be able to.
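To get a feel for why such enormous arm lengths matter, the change in arm length an interferometer has to resolve is the gravitational-wave strain multiplied by the arm length, ΔL = h × L. The sketch below uses an illustrative strain of 10⁻²¹; the figures are assumptions for illustration, not mission specifications:

```python
# Illustrative strain value; real signals vary with the source and frequency.
h = 1e-21  # dimensionless gravitational-wave strain (assumed figure)

arms = {
    "LIGO (4 km arms)": 4e3,            # metres
    "eLISA (1 million km arms)": 1e9,   # metres
}

for name, length in arms.items():
    delta_length = h * length  # change in arm length: dL = h * L
    print(f"{name}: dL = {delta_length:.1e} m")
```

Even with million-kilometre arms the displacement to measure is only on the order of picometres, which is why the noise-cancellation systems Pathfinder is testing matter so much.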
Suffice to say it’s an incredibly exciting time for researchers in the world of general relativity. With the foundations of the theory backed up by observational data there’s now a whole world of new physics for them to explore. Soon there will be troves of data for them to pore over, much of which will be used to design the eLISA craft. Whilst it’s going to be some time before we see eLISA launch into space, we at least know that when it does it will provide us incredible insight into our universe.
Aging, as a process, still remains a mystery to modern science. We know that it’s not just one thing that causes the symptoms of aging, which is what makes it so hard to find a miracle compound that erases everything. Still we’ve made some pretty good progress in combating parts of the aging process, much of which can be used to make our lives not only longer but also far healthier throughout. The latest research from scientists at the Mayo Clinic shows yet another potential pathway for delaying the onset of age related diseases and conditions, giving mice up to 35% longer lives.
The mechanism that the researchers focused on is called cellular senescence. Our cells constantly reproduce themselves through division, a process which repeats for each cell approximately 40 to 60 times before it enters a stage called cellular senescence. By this stage the cell’s telomeres, repetitive stretches of DNA that protect the ends of its chromosomes from damage, have shortened to the point where they can no longer provide the protection the cell needs. In this stage the cell will no longer divide but still remains active. Eventually these senescent cells are cleaned out by the body’s immune system, but as we age this process slows down and becomes less efficient.
The Mayo researchers used an existing transgene line, called INK-ATTAC, to induce cell death in these senescent cells. This was triggered by twice weekly injections into two different lines of lab mice, which were then compared to a control group. The results were incredibly impressive, showing an improvement in the mice’s overall lifespan of between 17% and 35%. The mice also showed no side effects from the treatment, with healthy major organ function retained throughout their extended lives. Suffice to say a treatment of this nature would be of incredible benefit to many, especially those seeking more healthy years rather than just an extended lifespan.
Such a treatment is probably many years away from reaching humans, however, mostly because the use of transgenes in humans is still an open area of bioethical debate. Indeed, whilst there appears to be broad consensus on using such treatments for curative purposes, therapeutic uses such as these are still something of a grey area. Transgenes like this one remain a very active area of research, however, and there are likely to be many more treatments like this developed in the coming years. Hopefully the regulatory and ethical frameworks will be able to keep up with the rapid pace of innovation, as treatments like these would be invaluable in treating the one condition that affects all humans universally.
Some of my favourite demonstrations of scientific principles are ones you expect to behave one way but, in reality, act completely differently. To me this demonstrates the value of experimentation and observation, as you can never be sure until you try something for yourself. It also usually means there’s some interesting physical phenomenon at play that I’m not yet familiar with, which usually means an enjoyable trip down a Wikipedia hole. The following video is one such demonstration, showcasing an interesting property of amorphous metals.
In this demonstration (the whole channel is worth watching by the way) we can see the difference between an amorphous metal surface and a traditional one when a ball bearing is dropped on it. The difference in bounce height is quite staggering, enough to make you think initially that there’s some form of spring hidden in the cylinder. The actual reason for the difference, which is briefly touched on in the video, is far more interesting than it being a simple trick.
The material that the atomic trampoline is made of has some rather unique properties. Regular metals are usually of a crystalline structure meaning that their component atoms are highly ordered. Amorphous metals on the other hand (sometimes referred to as metallic glass) have a highly disorganised structure, owing to the fact that they’re usually alloys (made up of several different metals) and their creation process stops the formation of a crystalline structure.
This disorganisation prevents the formation of defects called dislocations which appear in crystalline metals. When a ball bearing strikes the regular metal surface these dislocations glide through the other parts of the metal’s structure, dissipating a lot of the energy. In the amorphous metal however there are no such dislocations and so much less of the energy is lost with each bounce. Of course the lack of dislocations does not negate other losses due to sound and heat which is why the ball bearing doesn’t bounce infinitely.
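The difference in energy lost per bounce can be captured with a coefficient of restitution: bounce height scales with the square of that coefficient, since height is proportional to the kinetic energy at impact. The restitution values below are hypothetical, chosen only to illustrate the gap between an ordinary and an amorphous surface:

```python
def bounce_heights(h0, restitution, n_bounces):
    """Successive bounce heights; each bounce keeps restitution**2 of the height."""
    heights = []
    h = h0
    for _ in range(n_bounces):
        h *= restitution ** 2  # speed scales by e, so height scales by e squared
        heights.append(h)
    return heights

# Hypothetical restitution values for an ordinary steel plate vs an
# amorphous-metal plate; not measured figures from the video.
steel = bounce_heights(1.0, 0.60, 5)
amorphous = bounce_heights(1.0, 0.95, 5)
```

Even a modest difference in restitution compounds quickly: after five bounces the ball on the hypothetical amorphous plate is still at well over half its starting height, while on the ordinary plate it has all but stopped.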
What I’d love to see is the same experiment redone in a vacuum chamber with both the ball bearing and the surface made from amorphous metals. I’m sure we could get some really absurd bouncing times with that!
Dry ice is a very interesting substance, both from a scientific and “it’s just plain cool” point of view. Many are familiar with the billowing clouds of smoke it can produce when placed in water, seemingly a staple of anything that needs to be made to look spooky. Others will know it for its culinary applications, able to cool things down far more rapidly than any fridge or freezer. However what’s less well understood is the mechanism by which dry ice actually works, which can produce some rather interesting effects like those shown below.
Dry ice is the solid form of carbon dioxide which, thanks to its useful properties, has found many everyday applications. It’s also quite easy to manufacture as carbon dioxide is a byproduct of many other processes. This gas is then trapped and pressurized, changing it into a liquid form. Then the pressure is released, causing some of the liquid to boil off which rapidly cools down the remaining liquid. This then forms a kind of carbon dioxide snow which can then be compressed into blocks or small pellets. Industrial applications often use the large blocks whilst the pellets are used in more everyday applications.
The video above demonstrates a property of dry ice that’s not completely obvious if you don’t know what to look for. Carbon dioxide doesn’t have a liquid state at atmospheric pressure, which means it transitions directly from a solid to a gas. This process is called sublimation and means that the entire surface of a piece of dry ice is constantly emitting carbon dioxide gas. When you put something on top of it, like the large metal part shown above, the gas has to squeeze past the surface in order to escape. This is akin to pulling the neck of a balloon taut to make that loud screeching noise, which is why the part appears to “scream”.
There are many other videos of people producing similar effects with dry ice and other metal objects like spoons and pennies. One interesting thing I noted from some of them was that the screaming effect would often stop after a short period of time. I believe this is due to the metal’s temperature approaching that of the dry ice, at which point the dry ice no longer sublimates fast enough to sustain the sound. The part in the video above is likely carrying quite a bit of heat, which is why the screaming continues for so long.
Quite fascinating, if I do say so myself.
Everyone knows the standard static electricity experiment. You grab yourself a perspex rod and a wool cloth and, after some vigorous rubbing (often with a few innuendos thrown in), suddenly your perspex has the ability to attract pieces of paper. Most people will also understand the mechanism of action, the transference of charge that leaves the rod negatively charged and the cloth positively charged. What most people won’t know however is that friction isn’t required to generate a static charge. This is what can lead to hilarious situations like the one in the video below:
So if friction isn’t a requirement for generating a static charge, how do these address labels get it? The answer is actually pretty interesting and has to do with the way adhesives work. For these address labels the adhesion comes from a chemical reaction, meaning the labels had a form of bond with the backing before they were torn apart. When this bond is broken both materials gain or lose electrons, depending on where each material sits on the triboelectric series. I’d hazard a guess that the material the address labels are made of tends towards the negative end of the spectrum, meaning the bin holds a strong negative charge.
This is what’s responsible for the labels floating around in a seemingly random fashion before ejecting themselves out onto the floor. The effect wouldn’t have been immediate, as each label would only carry a small negative charge; however, past a certain point the repulsive force would have become strong enough to overcome the small weight of each label. If they were so inclined they could throw a positively charged piece of plastic in there and the labels would all be attracted to it, which would also be pretty interesting.
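As a rough sanity check on how little charge is needed, we can ask what charge two point charges would need for their Coulomb repulsion to match a label’s weight. The mass and separation below are assumed figures purely for illustration:

```python
import math

K = 8.99e9   # Coulomb constant, N*m^2/C^2
G = 9.81     # gravitational acceleration, m/s^2

def charge_to_levitate(mass_kg, separation_m):
    """Charge (coulombs) two identical point charges need so their mutual
    repulsion at the given separation equals one object's weight."""
    weight = mass_kg * G
    return math.sqrt(weight * separation_m ** 2 / K)

# Assumed figures: a ~0.5 g label repelled from a like charge 5 cm away.
q = charge_to_levitate(0.5e-3, 0.05)
print(f"{q:.1e} C")  # tens of nanocoulombs, well within static-electricity range
```

A few tens of nanocoulombs is a tiny amount of charge, which is why a bin full of freshly peeled backing paper can launch labels with no trouble at all.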
Or, if you wanted some real fun, if they rubbed their head with a balloon and then dunked themselves in there all the labels would gleefully stick to them. Not that that proves much, just that it’d be hilarious to see someone with shipping label backing stuck to them.
Before we get started let me just put this here:
LARGE PLOT SPOILERS BELOW FOR THE NEW STAR WARS MOVIE
There, now that’s out of the way let’s get onto the meat of this post.
I, like all Star Wars fans, had been very much looking forward to the latest movie. Whilst I have my reservations about some aspects of it (which I’ll reserve for a conversation over a couple of beers so as to avoid a flamewar on here) I still thoroughly enjoyed it. However, like most sci-fi movies, The Force Awakens plays fast and loose with science, following the rules of our universe when it suits the plot and sweeping them under the rug when it doesn’t. There are some grievances I’m willing to let slide in this respect, this is fiction after all, however there’s at least one egregious scene in which physics is completely thrown out the window when it really didn’t need to be.
My grievance lies with Starkiller base, the bigger and badder version of the Death Star which now encompasses an entire planet rather than just a small artificial moon. Whether such a device could even be built is something I’m willing to gloss over; however, the fact that it’s powered by drawing mass off its neighbouring star brings with it a few niggling questions. Its ultimate destruction, which then brings about the resurrection of its parent star, is also not something that would happen, nor something I’d be willing to write off with space magic.
We get to see Starkiller base fire once and then begin preparations for firing again. Assuming it didn’t travel to a new star in the interim (I don’t remember any indication that it did) it would’ve consumed half of its parent star’s mass to fire that single shot. That would’ve caused all sorts of grief for everything in orbit around it, not to mention the fact that that mass is now present on Starkiller base itself. Any asteroid or other debris nearby should have rocketed down to the surface with incredible speed, laying waste to it. I’m willing to give a pass for a “gravity pump” or something similar on the inside, but being able to negate half the mass of a star over the entire planet is pure fantasy rather than a stretch of fiction.
However the final destruction of Starkiller base is the most egregious flouting of the laws of physics. Putting aside the issue of all that mass contained within the base for a moment, when it was released the result would not be a new sun just like the old one. Whilst the mass was likely not compressed past its Schwarzschild radius (I’m assuming it’s a Sun-like star) it would still be far too compressed to simply balloon back out. Instead it would likely become a white dwarf, that is if the explosion wasn’t violent enough to simply disperse the star’s material across its solar system. Since the system that Starkiller base resides in was never named, I’m hazarding a guess it’s not relevant to the future plot, so the returning sun seems like a little bit of laziness more than anything else.
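For the curious, the Schwarzschild radius is easy to compute for a Sun-like star: r = 2GM/c², which works out to roughly 3km. Any smaller than that and the mass would collapse into a black hole rather than rebounding into anything:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Radius below which a mass collapses into a black hole: r = 2GM/c^2."""
    return 2 * G * mass_kg / C ** 2

r = schwarzschild_radius(M_SUN)
print(f"{r / 1000:.1f} km")  # roughly 3 km for a Sun-like star
```

Squeezing a star’s worth of mass inside a planet leaves you somewhere between that 3km limit and white dwarf densities, which is exactly the regime where it doesn’t come back out as a nice yellow sun.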
Of course I’m not advocating for 100% scientific accuracy in all films (indeed I don’t think there’s one good sci-fi epic that achieves it), however a few nods here or there wouldn’t go astray. There are certain times where scientific accuracy would harm the plot, and in those cases I’m fine to relinquish it and indulge in the fantasy. Other times, however, it would do no harm and would provide an interesting talking point, as sometimes the physical reality can be far more interesting than the fantasy.
Climate change is happening, there’s no doubt about that, and the main factor at play is us. The last decade has seen an increase in the frequency and severity of extreme weather events, much of which can be traced back to the amount of carbon we dump into the atmosphere. Thankfully the Paris climate deal is a good first step towards remediating the problem, even if the majority of its provisions aren’t legally enforceable. Until we take true action, though, we’ll keep seeing extreme weather events like the one below, where a river of ice flows through the middle of a desert:
The river, which at first glance appears to be a flow of sand, was caused by extreme weather in Iraq that saw the country blanketed in heavy rain and hail. The ice then overflowed rivers, creating this incredible phenomenon. It’s also the second freak weather event to hit Iraq since last summer, when the country experienced an extraordinary heat wave with temperatures hitting 52°C in Baghdad. Whilst things like this are interesting, they’re a symptom of a much larger issue, one that we all need to work together to solve.
The possibilities that emerge from a true quantum computer are to computing what fusion is to energy generation. It’s a field of active research, one in which many scientists have spent their lives, yet the promised land still seems to elude us. Just like fusion, though, quantum computing has seen several advancements in recent years, enough to show that it is achievable without giving us a concrete idea of when it will become commonplace. The current darling of the quantum computing world is D-Wave, the company that announced they had created functioning qubits many years ago and set about commercializing them. However they were unable to show substantial gains over simulations on classical computers for numerous problems, calling into question whether they’d actually created what they claimed to. Today however brings us results that demonstrate quantum speedup, on the order of 10⁸ times faster than regular computers.
For a bit of background, the D-Wave 2X (the device pictured above and the one which showed quantum speedup) can’t really be called a quantum computer, even though D-Wave calls it that. Instead it’s what you’d call a quantum annealer, a specific kind of computing device designed to solve very specific kinds of problems. This means it’s not a Turing complete device, unable to tackle the wide range of computing tasks we’d typically expect a computer to be capable of. The kinds of problems it can solve are optimizations, like finding local maxima or minima for a given equation with lots of variables. This is still quite useful, however, which is why many large companies, including Google, have purchased one of these devices.
In order to judge whether or not the D-Wave 2X was actually doing computations using qubits (and not just some fancy tricks with regular processors) it was pitted against a classical computer running the equivalent algorithm, called simulated annealing. Essentially this means the D-Wave was running against a simulated version of itself, a relatively easy challenge for a quantum annealer to beat. However, identifying the problem space in which the D-Wave 2X showed quantum speedup proved tricky, with it sometimes running at about the same speed or showing only a mild (relative to expectations) speedup. This brought into question whether the qubits D-Wave had created were actually functioning as claimed. The research continued, however, and has just recently borne fruit.
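For those unfamiliar with simulated annealing, here’s a minimal sketch of the classical algorithm: it wanders a landscape, always accepting downhill moves and occasionally accepting uphill ones with probability e^(−ΔE/T) as the temperature T cools. This is a toy one-dimensional version of my own devising, not the benchmark code used in the research:

```python
import math
import random

def simulated_annealing(energy, start, step, temps):
    """Minimise `energy`, accepting worse moves with probability exp(-dE/T)."""
    x = start
    for t in temps:
        candidate = x + random.uniform(-step, step)
        delta = energy(candidate) - energy(x)
        # Always take improvements; take worse moves less often as T cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
    return x

random.seed(0)
# Toy energy landscape with its minimum at x = 2.
result = simulated_annealing(lambda x: (x - 2) ** 2, start=10.0, step=1.0,
                             temps=[0.95 ** i for i in range(2000)])
```

In a real annealer the “moves” flip binary spin variables in an Ising-style model rather than nudging a real number, but the accept/reject rule at the heart of it is the same.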
The research, published on arXiv (not yet peer reviewed), shows that the D-Wave 2X is about 100 million times faster than its simulated counterpart. Additionally, a similar amount of speedup was observed against another classical algorithm, quantum Monte Carlo. This is the kind of speedup the researchers have been looking for and it demonstrates that the D-Wave is indeed a quantum device. The research also points towards simulated annealing being the best benchmark against which to judge quantum systems like the D-Wave 2X, something which will help immensely with future research.
There’s still a long way to go until we have a general purpose quantum computer, however research like this is incredibly promising. The team at Google that has been testing this device has come up with numerous improvements they want to make to it, and has developed systems to make it easier for others to exploit such quantum systems. It’s this kind of fundamental research which will be key to the generalization of this technology and, hopefully, its inevitable commercialization. I’m very much looking forward to seeing what the next generation of these systems brings and hope their results are just as encouraging.