Fiber is the future of communications; any technologist will tell you that. Whilst copper is still the mainstay for the majority, its lifetime is limited as optics are fast approaching the point where they're feasible for everything. However even fiber has its limits, ones some feel we were going to hit sooner rather than later, which could cause severe issues for the Internet's future. New research coming out of the University of California, San Diego, however, paves the way for boosting our fiber networks' bandwidth significantly.
Today's fiber networks are made up of long runs of fiber optic cable interspersed with devices called repeaters or regenerators. Essentially these devices are responsible for boosting the optical signal, which degrades as it travels down the fiber. The problem with these devices is that they're expensive, add latency and are power hungry, attributes that aren't exactly desirable. These problems stem from a physical limitation of fiber networks which puts an upper limit on the amount of power you can send down an optical cable. Past a certain point the more power you put down a fiber the more interference you generate, meaning there's only so much you can pump into a cable before you're doing more harm than good. The new research however proposes a novel way to deal with this: interfere with the signal before it's sent.
The problem with the interference generated by increasing the power of the signal is that it's unpredictable, meaning there's really no good way to combat it. The researchers however figured out a way of conditioning the signal before it's transmitted which makes the interference predictable. At the receiving end they then use what they're calling "frequency combs" to reverse the interference, pulling a useful signal out of the noise. In lab tests they were able to send a signal over 12,000 km without the use of a repeater, an absolutely astonishing distance. Such technology could drastically improve the efficiency of our current dark fiber networks, which would go a long way towards avoiding the bandwidth crunch.
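The core idea, stripped of the optics, can be sketched with a toy model: if the distortion is a known, deterministic function of the signal, the receiver can apply its exact inverse. This is purely an illustration of the principle; the Kerr-like phase model and all the names below are my own assumptions, not the researchers' actual technique:

```python
import numpy as np

# Toy illustration: a deterministic, power-dependent phase rotation stands in
# for the fiber's nonlinear interference. Because the distortion is a known
# function of the signal, the receiver can apply its exact inverse.
GAMMA = 0.5  # arbitrary nonlinearity strength (an assumption for this sketch)

def nonlinear_channel(signal, gamma=GAMMA):
    """Apply a Kerr-like phase shift proportional to instantaneous power."""
    return signal * np.exp(1j * gamma * np.abs(signal) ** 2)

def compensate(signal, gamma=GAMMA):
    """Undo the same deterministic distortion at the receiving end."""
    return signal * np.exp(-1j * gamma * np.abs(signal) ** 2)

# QPSK-like symbols standing in for the transmitted optical signal
rng = np.random.default_rng(42)
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1000)

recovered = compensate(nonlinear_channel(symbols))
print(np.allclose(recovered, symbols))  # the predictable distortion cancels out
```

The point being that once the interference is deterministic rather than random, cancelling it is trivial; the hard part the researchers solved was making it deterministic in the first place.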
This technology is still a while off from widespread use as, whilst it shows a lot of promise, the lab application falls short of a practical implementation. Current optical fibers carry around 32 different signals whereas the system the researchers developed can currently only handle 5. Ramping up the number of channels it can support is a non-trivial task but at least it's an engineering challenge and not a theoretical one.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heyday of the space program and they'll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding as there are so many things whose failure can lead to a rapid unscheduled disassembly. The only universal truth behind sending things into orbit atop a giant continuous explosion is that one day one of your rockets will end up blowing itself to bits. Today that happened to SpaceX.
The CRS-7 mission was SpaceX's 7th commercial resupply mission to the International Space Station, its primary payload consisting of around 1,800 kg of supplies and equipment. The most important piece of cargo was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and berthed, currently the less preferred method for coupling craft (especially for crew egress in an emergency). Other payloads included the Meteor Shower Camera, actually a backup unit as the primary was lost in last year's Antares rocket explosion.
Elon Musk tweeted shortly after the incident that the cause appears to have been an overpressure event in the upper stage LOX tank. Watching the video you can see what he's alluding to: shortly after take off there appears to be a rupture in the upper tank, leading to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket reached max-q, the point at which the aerodynamic stresses on the craft are at their maximum. It's possible that a high pressure event coinciding with max-q was enough to rupture the tank, leading to the rocket's demise. SpaceX's investigation is still ongoing however and we'll have a full picture once they conduct a complete fault analysis.
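For the curious, max-q is simply the peak of dynamic pressure, q = ½ρv²: air density falls as the rocket climbs while its speed keeps rising, so their product peaks partway up the ascent. A minimal sketch with purely illustrative numbers (the exponential atmosphere and the ascent profile below are my own assumptions, not CRS-7 figures):

```python
import math

def dynamic_pressure(rho, v):
    """Dynamic pressure q = 0.5 * rho * v^2, in pascals."""
    return 0.5 * rho * v * v

def air_density(altitude_m, rho0=1.225, scale_height=8500.0):
    """Simple exponential atmosphere: sea-level density, ~8.5 km scale height."""
    return rho0 * math.exp(-altitude_m / scale_height)

# Toy ascent: ~10 m/s^2 of acceleration and ~100 m/s average climb rate,
# sampled once per second for the first five minutes of flight.
profile = [(t, dynamic_pressure(air_density(t * 100.0), t * 10.0))
           for t in range(1, 300)]
t_maxq, q_max = max(profile, key=lambda p: p[1])
print(f"max-q at roughly t={t_maxq}s in this toy model")
```

Even with such crude numbers the shape of the curve is the same as a real ascent: stresses build, peak, then fall away as the air thins, which is why a tank rupture coinciding with that peak is so plausible a failure mode.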
A few keen observers have noted that, unlike other rocket failures which usually end in a rather spectacular fireball, it appears the payload capsule may have survived. The press conference held shortly afterwards mentioned telemetry data being received for some time after the explosion occurred, which would indicate that the capsule did indeed survive. However it's unlikely the payload would be retrievable as no one reported seeing parachutes after the explosion. It would be a great boon for the few secondary payloads if they could be recovered but I'm certain none of their owners are holding their breath.
This marks the first failed launch in 18 for SpaceX's Falcon 9 program, a milestone I'm sure nobody there was hoping to mark. To put that in perspective though, this is a 13 year old space company that has managed to do things that took its competitors decades. I'm sure the investigations currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time as it cannot be easy picking through the debris of your flagship rocket.
PC ports of mobile games have mostly been of low quality. Whilst many such games are built on a base engine that's portable between platforms, the porting is often done by the same team that developed the original game, and the paradigms they learnt developing for a mobile platform don't translate across. There are exceptions to this, of course, however it's been the main reason I've steered clear of many ported titles. The Silent Age however has received wide and varied praise, even after its recent transition to the PC, and so my interest was piqued. Whilst the game might not be winning any awards in the graphics or gameplay department it did manage to provide one of the better story experiences I've had with games of this nature.
You're Joe, the lowly janitor of the giant research and development corporation Archon. For the most part your life is pretty mundane except for the wild and wonderful things that your partner in crime, fellow janitor Frank, tells you about. One day however you're called up to management and, lucky for you, it's good news! You're getting promoted, taking over all of Frank's responsibilities because you've shown such dedication to your job (with no pay increase, of course, you understand). When you go down to inspect the place where you'll be doing your new duties however you notice something strange: a trail of blood leading into one of the restricted areas. Following that trail starts you on a long journey that will eventually end with you saving the world.
The Silent Age comes to us courtesy of the Unity engine, however you'd be forgiven for thinking it was an old school Flash game revamped for the mobile and PC platforms. It shares a similar aesthetic to many games from the era when Flash reigned supreme: simple colours, soft gradients and very simple animations. On a mobile screen I'm sure it looks plenty good, although on my 24″ monitors the simple style loses a little of its lustre. Still it's not a bad looking game by any stretch of the imagination, but you can tell which platform it was primarily designed for.
Mechanically The Silent Age plays just like any other indie adventure game, with your usual cavalcade of puzzles that consist of wildly clicking on everything and trying every item in your inventory to see if something works. The puzzles are really just short breaks between the longer dialogue sections which, interestingly enough, are all fully voiced. There's a small extra dimension added by the time travel device, allowing you to travel to the past or future at will, but it's nothing like the mind bending time manipulation made famous by some other indie titles. Other than that there's really not much more to The Silent Age, something I ended up appreciating as it meant there wasn't a bunch of other mechanics thrown in needlessly. It's pretty much the most basic form of an adventure game I've played in a while and that simplicity was incredibly refreshing.
The puzzles are logical, with most of them having fairly obvious solutions. There's no real difficulty curve to speak of as pretty much all of them felt on par with each other, although a few managed to stump me completely. Usually this was a result of me missing something or not recognizing a particular visual clue (a good example being the pile of wood in the tunnel under the hospital, which just looked like background to me), so that's not something I'd fault the developer for. Some of the puzzles were a little ludicrous, requiring a little knowledge about how certain things could potentially interact, but at least most of them wouldn't take more than ten minutes or so of blind clicking to get past. Overall it wasn't exactly a challenging experience, which I felt was by design.
The PC port is a smooth one, with pretty much everything in the game working as expected. The 2D nature helps a lot in this regard as tapping on a screen translates well to a mouse cursor, though I've seen lesser developers manage to ruin even that. There was one particular problem which caught me out several times however: the mouse is not captured if it strays outside the bounds of the main window. So every so often I'd end up clicking on my web browser or whatever else I had open on my second monitor at the time, closing the game down. A minor complaint, to be sure, but one that's easily fixed.
The story of The Silent Age is one of the better examples I’ve come across recently, especially for a mobile title. Whilst it’s not exactly the most gripping or emotionally charged story I’ve played of late it does a good job of setting everything up and staying true to itself internally. Of course whenever you introduce time travel into a story things start to get a little weird depending on what model of causality and paradox resolution you ascribe to and The Silent Age is no exception to this. However they manage to stay true to the rules they set up which is more than most high budget films are capable of. Overall I’d say it was satisfying even if it wasn’t the most engaging story.
The Silent Age is a succinct story told through the medium of video games, one that manages to avoid many of the pitfalls that have befallen its fellow mobile to PC port brethren. The art style is simple and clean, reminiscent of Flash games of ages gone by. The puzzle mechanics are straightforward, ensuring that no one will be stuck for hours trying every single item in their inventory to progress to the next level. The story, whilst above average for its peers, lacks a few key elements that would elevate it to a gripping, must-play tale. Overall The Silent Age was a solid experience, even if it wasn’t ground breaking.
The Silent Age is available on PC, Android and iOS right now for $9.99, $6.50 and $6.50 respectively. Game was played on the PC with approximately 2 hours of total play time and 71% of the achievements unlocked.
Despite all the evidence to the contrary, rights holders are able to convince governments around the world that piracy is a problem best faced with legislation rather than outright competition. It's been shown time and time again that access to a reasonably priced legitimate service results in drastic reductions in rates of piracy and, funnily enough, increased revenue for the businesses that adopt this strategy. Australia had been somewhat immune to the rights lobby's ploys for a while, with several high court rulings not finding in their favour. However our current government (and, unfortunately, the opposition) seems more than happy to bend to the whims of this group, with their most recent bow coming in the form of a website blocking bill.
The bill itself clocks in at a mere 9 pages, with the explanatory notes not going much further. Simply put, it provides a legislative avenue for rights holders to compel ISPs, via court injunction, to block access to sites that host infringing material. How that blocking should be done isn't mentioned at all, nor is there any mention of the recourse available to a site to have itself unblocked should it find itself the target of an injunction. Probably the only diamond in this pile of horseshit legislation is the protection ISPs get from costs borne out of this process, but only if they choose not to fight any injunction placed upon them. However all of that is moot when compared to the real issue at hand here.
It’s just not going to fucking work.
As I wrote last year when Brandis and co were soliciting ideas for this exact legislation, no matter what kind of blocking the ISPs employ (which, let's be honest here, will be the lowest and most painless form of blocking they can get away with) it will be circumvented instantly by anyone and everyone. The Australian government isn't the first to engage in wholesale blocking of sites, so solutions to get around it are plentiful, many of them completely free. Hell, with VPN usage in Australia already very healthy, most people already have a method by which to cut the ISPs completely out of the picture, rendering any action they take moot.
The big problem that I, and many others, have with legislation like this is that it sets a bad precedent that could be used to justify further site blocking policies down the line. It doesn't take much effort to take this bill, rework it to target other objectionable content and then have that pushed through parliament. Sure, we can hope that such policies won't make it through due to their obvious chilling effects, however this legislation faced no opposition from either of the major parties, so it follows that future ones could see just as little resistance. Worse still, there's almost no chance it will ever be repealed, as no government wants to give up power it's granted itself.
In the end this is just another piece of evidence that our current government has a fundamental lack of understanding of technology and its implications. The bill is worthless, a bit of pandering to the rights lobbyists, who will wield it with reckless abandon even as it fails to achieve its goals from day one. Already there are numerous sites telling users how to circumvent it and no amount of legislation can be passed to stop them. All we can hope for now is that this doesn't prove to be the first step on a slippery slope towards larger scale censorship as the Great Firewall of Australia begins to smoulder.
I had grand ideas that my current PC build would be all solid state. Sure the cost would've been high, on the order of $1,500 to get about 2TB in RAID10, but the performance potential was hard to deny. In the end however I opted for good old fashioned spinning rust, mostly because current RAID controllers don't pass TRIM through to SSDs, meaning I would likely be in for a lovely performance downgrade in the not too distant future. Despite that I was keenly aware of just how feasible it was to go full SSD for all my PC storage and how the days of the traditional hard drive are likely numbered.
Ever since their commercial introduction all those years ago SSDs have been rapidly plummeting in price, with the most recent drop coming off the back of a few key technological innovations. Whilst they're still an order of magnitude away from traditional HDDs in terms of cost per gigabyte (roughly $0.50/GB for SSDs versus $0.05/GB for HDDs) the gap in performance between the two is more than enough to justify the current price differential. For laptops and other portable devices that don't require large amounts of onboard storage SSDs have in many cases already become the sole storage platform, however they still lose out for large scale data storage. That gap could close quickly, although I don't think SSDs' rise to dominance will be instantaneous past that point.
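As a back-of-the-envelope illustration of how that gap could close, here's a tiny compounding calculation using the per-gigabyte figures above; the annual decline rates are purely my own assumptions, not industry forecasts:

```python
def years_to_parity(ssd_cost, hdd_cost, ssd_decline, hdd_decline):
    """Years until SSD $/GB falls to (or below) HDD $/GB at given annual declines."""
    years = 0
    while ssd_cost > hdd_cost:
        ssd_cost *= 1 - ssd_decline
        hdd_cost *= 1 - hdd_decline
        years += 1
    return years

# $0.50/GB for SSD vs $0.05/GB for HDD; assume SSD prices fall 30% a year
# against 10% for HDDs (illustrative rates only).
print(years_to_parity(0.50, 0.05, ssd_decline=0.30, hdd_decline=0.10))
```

At those assumed rates the ten-fold gap closes in about a decade; tweak the decline rates to see how sensitive the crossover is, which is exactly why small shifts in flash economics matter so much.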
One thing that has always plagued SSDs is the question of their durability and longevity, as the flash cells upon which they rely have a defined life in terms of program/erase cycles. Whilst SSDs have, for the most part, proven reliable even when deployed at scale, the fact is that they've really only had about 5 or so years of production level use to back them up. Compare that to hard drives, which have track records stretching back decades, and you can see why many enterprises are still tentative about replacing their fleets en masse; we just don't know how the various components that make up an SSD will stand the test of time.
However concerns like that are likely to take a back seat if things like a 30TB drive by 2018 come to fruition. Increasing the capacity of traditional hard drives has always proven difficult as there are only so many platters you can fit in the standard form factor. Whilst we're starting to see a trickle of 10TB drives into the enterprise market they're not likely to reach a cost effective price point for consumers anytime soon, and that gives SSDs a lot of leeway to play catchup to their traditional brethren. Cost parity could thus come much sooner than many anticipated, and at that point the choice of storage medium is effectively already made for the consumer.
We likely won't see spinning rust disappear for the better part of a decade, but the next couple of years are going to see something of a paradigm shift in which platform gets considered first. SSDs already reign supreme as the drive to have your operating system on; all they require now is a comparable cost per gigabyte to graduate beyond that. Reaching that point is likely to be an inflection point in the way we store our data and, for consumers like us, a great time to upgrade our storage.
For as long as I've been writing this blog E3 hasn't been much more than a distraction when it rolls around. Indeed in the 7 years I've been writing about games I've only ever covered it twice, and usually only in passing, picking out a couple of things that piqued my interest at the time. The reasons behind this would be obvious to any gamer, as E3 has been largely irrelevant to the gaming community since about 2007, with most of the big announcements coming out of other conventions like PAX. However this year something seemed to change, as both the gaming industry and community rallied behind this year's expo, making it one of the most talked about to date.
E3's quick fall into obscurity was fuelled by the extremely questionable decision, back in 2007, to close the event to the general public and only allow games industry representatives and journalists. The first year after this saw attendance drop to a mere 10,000 (down from 60,000 the year before) and the following year it halved again. The other conventions that popped up in E3's absence soaked up all those attendees and, by consequence, all the attention of the games industry and press. Thus E3 spent the last 5 years attempting to rebuild its relevance, struggling to find a foothold against such stiff competition.
This year however has proven to be one of E3's greatest on record, with attendance above 50,000 for the first time since that awful decision all those years ago. The rise in attendance has come hand in hand with a much larger industry presence, with major game developers and publishers out in force. There were also numerous major announcements from pretty much all of the large players in the console and PC markets, something we really hadn't seen at a single event for some time. For someone who's been extremely jaded about E3 for so long it honestly took me by surprise just how relevant it had become and what that might mean for the conference's future.
The challenge E3 now faces is building on the momentum it created this year in order to re-cement its position as top dog of the games conferences. In its absence many of the larger players in the games industry opted to either patronize other conferences or set up their own, many of which have gone on to be quite profitable events (like BlizzCon, for example). E3 will likely never be able to replace them, however given the resounding success of this year's conference there is potential for it to start drawing business away from some of the others.
In the end though more competition in this space will hopefully lead to better things for the wider gaming community. It will be interesting to see if E3 can repeat their success next year and what the other conventions will be doing in response.
Outside of Earth, Europa is probably the best place for life as we know it to develop. Beneath the radiation soaked exterior, an ice layer that could be up to 20 km thick, lies a vast ocean that stretches deep into Europa's interior. This internal ocean, though bereft of any light, could very well harbor the right conditions to support the development of complex life. However if we're ever going to entertain the idea of exploring the depths of that vast, dark place we'll first need a lot more data on Europa itself. Last week NASA greenlit the Europa Clipper mission, which will do just that, slated to fly sometime in the 2020s.
Exploration of Europa has been relatively sparse, the most recent mission being the New Horizons probe, which imaged Europa during its Jupiter flyby on the way to Pluto. Indeed the majority of missions that have imaged Europa have been flybys, the only long duration mission being the Galileo probe, which spent 8 years in orbit around Jupiter and made numerous flybys of Europa. The Europa Clipper mission would be quite similar in nature, with the craft conducting multiple flybys rather than staying in orbit: a multi-year journey to our Jovian brother followed by no fewer than 45 flybys of Europa once it arrived.
It might seem odd that an observation mission would opt for numerous flybys rather than a continuous orbit, however there are multiple reasons for this. For starters Jupiter has a powerful radiation belt that stretches some 700,000 km out from the planet, enveloping Europa. This means any craft that dares enter Jupiter's orbit has a somewhat limited lifetime; had NASA opted for an orbital mission rather than a flyby one, the craft's expected lifetime wouldn't have been much more than a month or so. Strictly speaking this might not be too much of an issue, as you can make a lot of observations in a month, however the real challenge comes from getting that data back down to Earth.
Deep space robotic probes are often capable of capturing far more information than they're able to send back in real time, so they store much of it locally and transmit it back over a longer period. If the Europa Clipper were orbital it would only have 30 days in which to send back information, not nearly enough for the volumes of data modern probes generate. The flybys however give the probe more than enough time to dump all of its data back to Earth whilst it's coasting outside Jupiter's harsh radiation belts, ensuring all the data gathered is returned safely.
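The trade-off is easy to see with a rough link-budget calculation; the data volume and downlink rate below are illustrative assumptions on my part, not actual mission figures:

```python
def downlink_days(data_gb, rate_kbps):
    """Days needed to transmit data_gb gigabytes at a sustained rate_kbps downlink."""
    bits = data_gb * 8e9            # gigabytes -> bits
    seconds = bits / (rate_kbps * 1e3)  # kbit/s -> bit/s
    return seconds / 86400.0        # seconds -> days

# Suppose a single flyby gathers 50 GB of science data and the deep-space
# link sustains 100 kbit/s (both numbers are assumptions for illustration).
print(f"{downlink_days(50, 100):.0f} days to return one flyby's data")
```

At those rates a single flyby's haul takes well over a month to return, which is exactly why an orbiter surviving only ~30 days inside the radiation belts could never get its data home, whilst a flyby craft has all the coasting time it needs.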
Hopefully the data this craft brings back will pave the way for a mission to the surface sometime in the future. Europa has so much potential for harboring life that we simply must investigate it, and the data gleaned from the Europa Clipper mission will provide the basis for a future landing mission. Of course such a mission is likely decades away, however I, and many others, believe that a mission to poke beneath the surface of Europa is the best chance we have of finding alien life. Even if we don't find it, the mission will provide valuable insight into the conditions that form life and help direct our future searches.
Many games have sought to capture some of Telltale's success by emulating their trademark brand of story-first games. For the most part this comes in the form of copying the core mechanics, usually the dialogue choices and the quick time based action sequences. Few however have attempted to emulate the cel-shaded comic book style, as most have their own art direction to pursue. D4: Dark Dreams Don't Die appears to be an almost blow-for-blow recreation of the Telltale style, down to the art direction, however the similarities are only skin deep. Whilst I admit I decided to play it to lambaste it for its almost shameless imitation, the actual experience was something I didn't expect, a rare occurrence for this humble writer.
You are David Young, former detective with the Boston Police Department and recent widower of his beloved wife, Little Peggy. The tragic incident that took his wife away left him with an amazing gift: the ability to travel back in time to see the past as it happened. He can't do this at will though, only through the use of objects that hold some significance to the past, but those mementos are what he needs to achieve his real goal: to find "D". Before she died Little Peggy told David to look for D, and so David left the BPD to pursue this elusive character in the hope of unravelling the mysteries behind her murder.
As I alluded to earlier, D4 emulates the Telltale style by using cel-shading to make everything look like a cartoon. Like most games that use this stylization it works well for the most part, however every so often the 3D world just doesn't interact well with it, leading to some rather weird moments. Probably the biggest standout is the incessant bubble gum blowing the main character does, which just looks silly, especially since his lips don't move the whole time he does it. It also doesn't look too great up close, something which becomes painfully apparent when the game zooms in on a character's face. Overall though the visual quality is above average, even with the venerable Telltale games in the mix.
Like nearly all games of a similar style D4 is an adventure/puzzler, putting you in various cordoned off rooms with dozens of objects to interact with in order to solve the current objective and progress to the next section. It may not seem like a lot at first, however once you get to the end, which displays your completion level, it becomes clear there really is quite a lot hiding in every room of D4. These additional objects usually flesh out the backstory of the various characters, whilst some unlock non-gameplay-impacting collectibles like new clothes. There's also a quick time event based combat system which engages during high tension moments, something most gamers lament but which actually felt relatively well implemented. Finally there are skerricks of an RPG style progression system in the form of stamina (used when you interact with objects), life (lost when you fail a quick time event) and vision (used to identify things you should interact with), all of which can be improved with the right clothing or by finding a certain collectible. This all adds up to a game which, if you so wish, has quite a lot of replayability, or which can simply be played from start to finish for the story without a care for the rest.
The puzzles are pretty straightforward since there's no inventory to speak of, meaning they can all be solved by simply clicking on enough things and stumbling through the right dialogue options. If you're paying attention you can skip quite a lot of the fluff, however doing so can rob you of important pieces of backstory that flesh out your character's motivations and those of the people around him. For the most part though, if you take the typical "click on all the things" approach that games of this kind encourage, you're likely to stumble across all the pertinent plot points without too much worry. Even if you miss them you can go back and replay the episode, which won't take long if you know exactly which buttons to press.
Mechanically D4 plays well for the most part however the quick time detection seems a little off at some points as the achieved “sync rate” can be a little random. I’ve had times when I completely fumbled it and got 100% whilst other times I’ve done it perfectly (or so I thought) and gotten 50%. This mostly happened on the diagonal ones so I figure there’s something a little wrong in the detection algorithm for that particular quick time event. There’s also almost no way to tell how to “stay in character” with the dialogue options in order to get 100% sync as most of them seem in line with what David would say, just some are more or less dickish than others. There might be some kind of hint or mechanic that I didn’t fully understand that makes this a lot clearer but unfortunately for me I just didn’t figure it out.
D4's story starts out incredibly weak as it's a really confusing blend of elements (supernatural powers, a detective with amnesia, a person who acts like a cat for some inexplicable reason) that don't seem to gel together. However over the course of the first 2 episodes that come with the initial game most of them start to make sense and the story really picks up as you uncover more clues to the events that preceded it. Like most episodic games it feels unfair to judge D4 on just a fraction of the whole story, but it at least has one of the stronger foundations to build upon, so it will be interesting to see where the developers take it from here.
It would be so easy to write off D4: Dark Dreams Don’t Die as a simple Telltale clone however the game comes into its own over the course of the first two episodes. Sure it may not be the graphical marvel that many other games might be, nor is its quick time event system completely satisfactory, but it does provide a rather enjoyable experience. Whilst the director doesn’t know how many episodes the story might have suffice to say there’s easily enough build up for at least a full season and hopefully those episodes are forthcoming sooner rather than later. If you’re a fan of the Telltale style of games then you won’t be disappointed with D4: Dark Dreams Don’t Die.
D4: Dark Dreams Don't Die is available on PC and Xbox One right now for $14.99 and $19.95 respectively. Game was played on the PC with a total play time of 3 hours and 42% of the achievements unlocked.
If you’re looking to watch people play games live there’s really only one place to look: Twitch. It started out its life as the bastard stepchild of Justin.tv, a streaming platform for all things, however it quickly outgrew its parent and at the start of last year the company dumped the original product and dedicated itself wholly to Twitch. Various other streaming apps have popped up in its place since then but none have been able to hold a candle to Twitch’s dominant position in the game streaming market. The one platform that could however has just announced YouTube Gaming which has the potential to be the first real competitor to Twitch in a very long time.
Whilst the product isn’t generally available yet, slated to come out sometime soon, it has already made its way into the hands of many journalists who’ve taken it for a spin. The general sentiment seems to be that YouTube has essentially copied the fundamental aspects of Twitch’s streaming service, mostly in regard to layout and features, whilst adding a couple of extras that serve as bait to attract both streamers and viewers to the platform. Probably the most interesting aspects of YouTube’s platform are what it leaves out and what it adds: there’s no subscription payment system, whilst the dreaded Content ID system will be in full force on all streams.
The main thing that will draw people to YouTube’s streaming service, however, is most likely the huge infrastructure that YouTube can draw on. YouTube has already demonstrated that it can handle the enormous traffic that live streaming can generate, as it currently holds the world record for concurrent live stream viewers: 8 million, set during the Felix Baumgartner jump back in 2012. Twitch, despite its popularity, has experienced numerous growing pains when attempting to scale its infrastructure outside the US, and many have pined for a much better service. YouTube, with the Google backbone at its disposal, has the potential to deliver that; however, I’m not sure if that will be enough to grab a significant share of this market.
Twitch has, for better or for worse, developed a kind of culture around streaming games and has thus set a lot of expectations for what users want in a competing streaming product. YouTube Gaming gets most of the way there in its current incarnation; however, the absence of a few things, like an IRC backend for chat and paid subscriptions, could end up being what keeps people away from the platform. The former is easy enough to fix, either by adopting IRC directly or simply providing better tools for managing the chat stream; the latter isn’t likely to change anytime soon. Sure, YouTube has its one-off payment system, but that runs against current community norms and thus will likely not see as much use. That in turn feeds into a monetization problem for streamers which is likely to deter many from adopting the platform.
All that being said, it’s good to see some competition coming to this space, as it should hopefully mean fiercer innovation from both parties as they vie for market share. YouTube Gaming has a massive uphill battle ahead of it; however, if anyone has the capability to fight Twitch on its own ground it’s YouTube. The next 6 months will be telling, showing just how many are willing to convert away from the Twitch platform and whether or not it will become a sustainable product for YouTube long term.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of nearby celestial objects; however, for scientific observations they’re usually less desirable. Instead most large scientific telescopes are reflecting telescopes, which use a large mirror to reflect the image onto a sensor for capture. The larger the mirror the bigger and more detailed the picture you can capture; however, bigger mirrors come with their own challenges, especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror, and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however it also brings added benefits like being able to correct for distortions created by the atmosphere. NASA’s new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like its larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to the traditional mirrors on telescopes, especially space based ones like the Hubble, this has the potential to significantly reduce weight whilst dramatically increasing the size of the mirror we can use. The bigger the mirror the more light that can be captured and analysed, and a mirror built from this cloud of particles could be many times larger than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter, which was used to validate the concept with two simulated “stars” shining through it. Whilst the current incarnation needed multiple exposures and a lot of image processing to create the final image it does show that the concept could work; however, it requires much more investigation before it can be used for real observations.
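To get a feel for how software can pull a coherent picture out of many distorted exposures, here’s a toy sketch in Python. This is emphatically not NASA’s actual pipeline: the shift-and-add (“lucky imaging”) technique shown here merely stands in for their reconstruction software, and every name and number is an illustrative assumption. Each simulated exposure lands at a random offset (standing in for the drifting cloud), and stacking the frames after aligning them on their brightest pixel recovers the two “stars”.

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE = 64
truth = np.zeros((SIZE, SIZE))
truth[30, 30] = 1.0   # simulated "star" A (brighter)
truth[30, 40] = 0.6   # simulated "star" B, 10 pixels to the right

def exposure(img, rng, max_shift=5, noise=0.05):
    """One short exposure: the whole scene lands at a random offset
    (a crude stand-in for the drifting glitter cloud) plus sensor noise."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    frame = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return frame + noise * rng.random(img.shape)

def shift_and_add(frames):
    """Align every frame on its brightest pixel, then average them --
    the classic 'lucky imaging' trick for stacking distorted exposures."""
    ref = np.unravel_index(np.argmax(frames[0]), frames[0].shape)
    stacked = np.zeros_like(frames[0])
    for f in frames:
        peak = np.unravel_index(np.argmax(f), f.shape)
        stacked += np.roll(np.roll(f, ref[0] - peak[0], axis=0),
                           ref[1] - peak[1], axis=1)
    return stacked / len(frames)

frames = [exposure(truth, rng) for _ in range(200)]
recovered = shift_and_add(frames)
# After stacking, both stars stand out clearly above the averaged noise,
# still separated by 10 pixels.
```

The real problem is vastly harder (each particle contributes its own reflection, so the raw frames are a speckle pattern rather than a cleanly shifted scene), but the same principle of combining many imperfect exposures in software underlies the approach.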
A potential mission to verify the technology in space would use a small satellite carrying a prototype cloud no bigger than a bottle cap. This would primarily be aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proved successful, they could then move on to capturing images. Whilst there doesn’t appear to be a strict timeline for that yet, this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program and so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope is anyone’s guess, but such technology does show incredible promise.