Ever since I can remember my joints have been prone to popping and cracking. It was worst when I was a child as I couldn’t really sneak around anywhere without my ankles loudly announcing my presence, thwarting whatever shenanigans I was up to. Soon after I discovered the joy of cracking my knuckles and most other joints in my body, much to the chagrin of those around me. However, even though I was warned of health effects (which I’m pretty sure are bunk), I never looked up the actual mechanism behind the signature sound and honestly it’s quite interesting:
Interestingly though, whilst cavitation in the synovial fluid is one of the better explanations for where the sound originates, there are still other mechanisms which can cause similar audible effects. Rapid stretching of ligaments can also result in similar noises, usually due to tendons snapping from one position to another. Some sounds are also the result of less benign activities, like the tearing of intra-articular adhesions, although that usually goes hand in hand with a not-so-minor injury to the joint.
There’s also been a little more investigation into the health effects of cracking your knuckles than what the video alludes to. A recent study of 215 patients in the age range of 50 to 89 showed that, regardless of how long a person had been cracking their knuckles, there was no relationship between cracking and osteoarthritis in those joints. Now this was a retrospective study (in terms of people telling the researchers how much they cracked their knuckles) so there’s potential for biases to slip in there, but they did use radiographs to determine whether the patients had arthritis or not. There are no studies on other joints, however, although I’d wager that the mechanisms, and thus their effects, are very similar throughout the body.
And now if you’ll excuse me I’ll be off to disgust my wife by cracking every joint in my body 🙂
There’s no question that Apple was the primary force behind the Bring Your Own Device (BYOD) movement. It didn’t take long for every executive to find themselves with an iPad in their hands, wondering why they had to use their god damn BlackBerry when the email experience on their new tablet was so much better. Unfortunately, as is the case with most Apple products, the enterprise integration was severely lacking and the experience suffered as a result. Today the experience is much better, although that’s mostly the result of third party vendors developing solutions, not so much Apple developing the capability themselves. It seems that after decades of neglecting the enterprise Apple is finally ready to make a proper attempt at it, although in the most ass backwards way possible.
Today Apple announced that it would be partnering with IBM in order to grow their mobility offerings starting with a focus on applications, cloud services and device supply and support. IBM is going to start off by developing 100 “industry specific” enterprise solutions, essentially native applications for the iPhone and iPad that are tailored for specific business needs. They’ll also be growing their cloud offering with services that are optimized for iOS with a focus on all the buzzwords that surround the BYOD movement (security, management, analytics and integration). You’ll also be able to source iOS devices from IBM with warranty backing by Cupertino, enabling IBM to really be your one stop shop for all things Apple related in the enterprise.
At a high level this would sound like an amazing thing for anyone who’s looking to integrate Apple products into their environment. You could engage IBM’s large professional services team to do much of the leg work for you, freeing you from worrying about the numerous issues that come from enabling a BYOD environment. The tailored applications would also seem to solve a big pain point for a lot of users as the only option most enterprises have available to them today is to build their own, a significantly costly endeavour. Plus if you’re already buying IBM equipment their supply chain will already be well known to you and your financiers, lowering the barrier to entry significantly.
Really it does sound amazing, except for the fact that this partnership is about 5 years late.
Ever since everyone wanted their work email on an iPhone there have been vendors working on solutions to integrate non-standard hardware into the enterprise environment. The initial solutions were, frankly, more trouble than they were worth, but today there’s a myriad of applications available for pretty much every use case you can think of. Indeed pretty much every single thing that this partnership hopes to achieve is already possible today, not at some undetermined time in the future.
This is not to mention that IBM is also the last name you’d think of when it comes to cloud services, especially when you consider how much business they’ve lost as of late. The acquisition of SoftLayer won’t help them much in this regard as they’re building up an entirely new capability from scratch which, by definition, means that their offering will be behind everything else that’s currently available. They might have the supply chains and capital to be able to ramp up to public cloud levels of scalability, but they’re doing it several years after everyone else has, in a problem space that is pretty much completely solved.
The only place I can see this partnership paying dividends is in places which have yet to adopt any kind of BYOD or mobility solution which, honestly, are few and far between these days. This isn’t an emerging market that IBM is getting in on the ground floor of, it’s a half-decade-old issue that’s had solutions from numerous vendors for some time now. Any large organisation, which has been IBM’s bread and butter since time immemorial, will already have solutions in place for this. Transitioning them away from that is going to be costly and I doubt IBM will be able to provide the requisite savings to make it attractive. Smaller organisations likely don’t need the level of management that IBM is looking to provide and probably don’t have a working relationship with Big Blue anyway.
Honestly I can’t see this working out at all for IBM and it does nothing to improve Apple’s presence in the enterprise space. The problem space is already well defined with solid solutions available from multiple vendors, many of which already have numerous years of use in the field. The old adage of never getting fired for buying IBM has long been irrelevant and this latest foray into a field where their experience is questionable will do nothing to bring it back. If they do manage to make anything of this I will be really surprised, as entering a market this late in the piece rarely works out well, even if you have mountains of capital to throw at it.
My fellow IT workers will likely be familiar with the non-standard hours our work can require us to keep. Since we’re an essential service any interruption means that other people are unable to work, so we’re often left with no choice but to continue working long after everyone else has left. Thankfully I moved out of doing that routinely long ago, however I’ve still had my fair share of long weeks, weekend work and the occasional all-nighter in order to make sure a job was done properly. I’ll never work more hours simply for the sake of it though as I know my productivity rapidly drops off after a certain point, meaning the extra hours aren’t particularly effective. Still there seems to be something of a worship culture around those who work long hours, even if the results of doing so are questionable.
My stance has always been that everyone should be able to complete their work in the standard number of work week hours and if goals aren’t being met it’s a fault of resourcing, not the amount of effort being put in. Too often though I’ve seen people take it upon themselves to make up for these shortcomings by working longer hours which feeds into a terrible cycle from which most projects can’t recover. It often starts with individuals accommodating bursts of work which falsely set the expectation that such peaks can be routinely accommodated. Sure it’s only a couple extra hours here or there but when each member of a team of 20 does that you’re already a resource behind and it doesn’t take much to quickly escalate from there.
The problem, I feel, stems from the assumption that hours worked equal contribution. In all cases this is simply not true, as many studies have shown that, even for routine tasks with readily quantifiable output, your efficiency degrades over time. Indeed my highly unscientific observations, coupled with a little bit of online research, show that working past the 8 hour mark per day will likely lead to heavy declines in productivity over time. I’ve certainly noticed that among people I’ve worked alongside during 12+ hour days, as the pace of work rapidly declines and complex issues take far longer to solve than they would have at the beginning of the day.
Thus the solution is twofold: we need to stop idolizing people who put in “long hours” and be steadfast when it comes to taking on additional work. Stopping the idolization means that those who choose to work longer hours, for whatever reasons, are no longer used as a standard by which everyone else is judged. It doesn’t do anyone any good to hold everyone to standards like that and will likely lead to high levels of burnout and turnover. Putting constraints around additional work means that no one should have to work more than they need to and should highlight resourcing issues long before they become a problem that can’t be handled.
I’m fortunate to work for a company that values results over time invested and it’s been showing in the results that our people have been able to deliver. As someone who’d worked in organisations where the culture valued hours and the appearance of being busy over everything else it’s been extremely refreshing, validating my long held beliefs about work efficiency and productivity. Working alongside other agencies that don’t have this culture has provided a stark reminder of just how idiotic the idolization of overtime is and why I’ll likely be sticking around this place for a while to come.
There are 2 main reasons why I’ve avoided writing about the NBN for the last couple of months. For the most part it’s because there’s really been nothing of note to report, and sifting through hours of senate talks to find a nugget of new information to write about isn’t really something I’m particularly enthused about doing. Secondly, as someone who’s deeply interested in technology (and makes his living out of services that could make heavy use of the NBN), the current state of the project is, frankly, infuriating and I don’t think people enjoy reading about how angry I am. Still it seems that the Liberals’ MTM NBN plan has turned from a hypothetical farce into a factual one and I’m not one to pass up an opportunity to lay down criticism where criticism is due.
The slogan the Liberals ran with during their election campaign was “Fast. Affordable. Sooner.”, promising that they’d be able to deliver at least 25Mbps to every Australian by the end of 2016, ramping up to 50Mbps by the end of 2019. This ended up being called the Multi-Technology Mix (MTM) NBN, which would now include the existing HFC networks rather than overbuilding them and would switch to FTTN technology rather than FTTP. The issues with this plan were vast and numerous (ones I’ve covered in great detail in the past) and suffice to say the technology community in Australia didn’t buy into the ideas one bit. Indeed as time has progressed the core promises of the plan have dropped off one by one, with NBNCo now proceeding with the MTM solution despite a cost-benefit analysis not being completed and the speed guarantee now gone completely. If that wasn’t enough, it’s come to my attention that even though they’ve gone ahead with the solution NBNCo hasn’t been able to connect a single customer to the FTTN solution.
It seems the Liberals’ promises simply don’t stand up to reality, fancy that.
The issues they seem to be encountering with deploying their FTTN trial are what many of the more vocal critics had been harping on about for a long time, primarily the power and maintenance requirements of FTTN cabinets. Their Epping trial has faced several months of delays because they weren’t able to source adequate power, a problem which currently has no timeline for a solution. The FTTP NBN, which used Gigabit Passive Optical Network (GPON) technology, does not suffer from this kind of issue at all, and this was showing in the ramp up in deployment numbers that NBNCo was seeing before it stopped its FTTP rollouts. If just the trial of the MTM solution is having this many issues then it follows that the full rollout will fare no better, and that puts an axe to the Liberals’ election promises.
We’re rapidly approaching the end of this year which means that the timeline the Liberals laid out is starting to look less and less feasible. Even if the trial site gets everyone on board before the end of this year that still gives only 2 years for the rest of the infrastructure to be rolled out. The FTTP NBN wasn’t even approaching those numbers so there’s no way in hell that the MTM solution would be able to accomplish that, even with their little cheat of using the HFC networks.
So there goes the idea of us getting the NBN sooner but do any of their other promises hold true?
Well the speed guarantee went away some time ago, so even the Liberals admit that their solution won’t be fast; the only thing they might be able to argue is that they can do it cheaper. Unfortunately for Turnbull, his assumption that Telstra would just hand over the copper free of charge proved wrong, as Telstra had no interest in doing so. Indeed, as part of the renegotiation of the contract with Telstra, NBNCo will be paying some $150 million for access to 200,000 premises worth of copper which, if extrapolated to all of Australia, would come to around $5.8 billion. This does not include the cabinets or remediating any copper that can’t handle FTTN speeds, which will quickly eat into any savings on the deal. That’s not even going into the ongoing costs these cabinets will incur during their lifetimes, an order of magnitude more than what a GPON network would.
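As a rough sanity check on that extrapolation, here’s the back-of-the-envelope arithmetic. Note that the national premises count below is my own assumption (chosen to be consistent with the ~$5.8 billion figure); only the $150 million for 200,000 premises comes from the reported deal.

```python
# Back-of-the-envelope check of the copper access cost extrapolation.
# Known from the reported deal: $150 million for access to 200,000 premises.
deal_cost = 150_000_000
deal_premises = 200_000

cost_per_premise = deal_cost / deal_premises  # $750 per premise

# Assumed: roughly 7.7 million premises would need copper access nationally.
# This is a hypothetical figure, not one stated in the deal.
national_premises = 7_700_000
national_cost = cost_per_premise * national_premises

print(f"${cost_per_premise:.0f} per premise, ~${national_cost / 1e9:.1f} billion nationally")
# → $750 per premise, ~$5.8 billion nationally
```

Even under generous assumptions about how many premises actually need copper access, the per-premise price makes it hard to see where the promised savings come from once cabinet and remediation costs are layered on top.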
I know I’m not really treading any new ground by writing all this, but the MTM NBN is beyond a joke now; a failed election promise that’s done nothing to help the Liberals’ waning credibility and will only do damage to Australia’s technology sector. Even if they do get voted out come next election it’ll be years before the damage can be undone, which is a royal shame as the NBN was one of the best bits of policy to come out of the tumultuous time that was Labor’s last 2 terms in office. Maybe one day I’ll be able to look back on all my rants on this topic and laugh about it, but until that day comes I’ll just be yet another angry IT sector worker, forever cursing the government that took away my fibre filled dream.
A game based around any of the world wars is usually an instant turn off for me. The number of games that have been based around those events is so numerous that there really doesn’t feel like there can be any more angles to tackle it from, as pretty much every story from it has been done to death. The alternate reality and fantasy versions of it, like those in Wolfenstein, get away with it since they’re not wholly dependent on war stories for inspiration, but they’ll still need a little something extra to pique my interest. Valiant Hearts, which comes to us care of Ubisoft Montpellier, has been receiving wide praise for its touching story. As someone who’s just come off 2 rather lacklustre story based titles I wasn’t hoping for miracles, but Valiant Hearts managed to surprise me, bringing this writer to tears at its conclusion.
The year is 1914 and the assassination of Archduke Franz Ferdinand has caused Germany to declare war on Russia. France, anticipating that this war will escalate far beyond those two countries, deports all of its German citizens back to their home country. Karl is one of those citizens; torn away from his wife and young son, he is sent to the frontlines of the war to fight for his home country. Not long after, his wife’s father, Emile, is called to duty as well and sent to fight for the French army. What follows is a tale of how the war drives families apart and the never ending quest for them to be reunited once again.
Valiant Hearts reminds me of the flash games of yesteryear, albeit with production values far exceeding those of any of its predecessors. It was developed on the same framework that powered Ubisoft’s recent release Child of Light and it’s easy to see just how heavily the choice of that platform influenced the artwork. In contrast to Child of Light, however, Valiant Hearts’ art style is far darker and more monotone, with infinite shades of brown and grey forming the primary colour palette. This does mean that when colour is used it’s quite striking, and the art team does a fantastic job of using it to great effect. This also extends to the beautiful soundtrack that accompanies the game, ebbing and flowing at all the right moments.
In terms of actual game play Valiant Hearts is much like other story-first games in the sense that the game play usually takes a back seat to progressing the story. For the most part you’ll be doing elaborate fetch quest missions that require you to find one item in order to progress to the next section. Sometimes you’ll have to make your way past various different people in order to get to the final objective and, try as you might, there’s no clever way to bypass certain things. There’s also a bevy of quicktime-esque events that will require you to either guess correctly or simply memorize the sequence in which events happen in order to move on to the next part of the story.
Thankfully Valiant Hearts didn’t fall into the trap of putting far too much game play in between sections of the story like both of the recent titles I played through did. In all honesty I didn’t think it was a major hurdle for games of this nature to get past, as many of them are done by indie developers and so ancillary mechanics are usually at the bottom of their to do list. However with 2 games falling prey to the same problems I have to commend Valiant Hearts for getting the pacing right, which helps immensely with keeping the player interested in the story. There were some sections that could use some tuning but compared to my recent experiences it was heaven.
Most of the puzzles are fairly intuitive as your inventory space is limited to a single item, limiting the amount of complexity that the game can throw at you significantly. There’s a pretty good variety of puzzle mechanics so you won’t be redoing the same thing over and over again but most of them shouldn’t take you more than 10 minutes or so to figure out. A couple of them will require you to think laterally about what you’re doing as some of them lack obvious cues as to what might interact with what. This did lead to a couple confusing moments when I wasn’t quite sure if I was doing the right thing but most of the time you’ll get there through trial and error.
One issue I did find with Valiant Hearts was that, since there’s not a lot of visual differentiation between different parts of the environment, it can sometimes be hard to find the path you’re meant to go down or which elements are interactive. This meant that in some of the more visually busy sections I was wondering just where exactly I was meant to go as I couldn’t find the particular path to go down. I also had some deaths that felt like they were due to visual confusion more than anything else. This might just be a fault of the writer, however it’s still an issue that should be pointed out.
Of course what really makes Valiant Hearts worth playing is the story. Overall it’s a pretty typical story of a family torn apart by war, almost Romeo and Juliet-like in its star-crossed lovers from different houses idea, and the tale of them trying to reunite with each other. The main characters all receive the background and development they deserve, which helps immensely when it comes to scenes that rely on engaging your sense of empathy with them. Some of the elements are a little on the fantasy side, which can be a tad distracting from the overall message that the game tries to put forth, however they’re only there as aides to the plot so they’re easily pushed aside.
I’ll have to admit that for probably the first half or so of Valiant Hearts I wasn’t too emotionally invested in the characters or story. Whilst the opening was gripping enough to draw me into playing the game further there’s a bit of a dearth in the early game as the characters are seemingly just going through the motions. However as each of their back stories is developed in detail you find yourself becoming attached to them and each tragedy that befalls them starts to cut into you. The final climactic scene is by far one of the most bittersweet endings I have endured in recent memory and whilst it might lean on the cheesy/predictable side that didn’t stop me from bursting into tears, overcome with a sense of grief.
Valiant Hearts is a beautiful story masterfully told through the medium of video games. The art style and music direction are some of the best I’ve experienced in their category, taking the traditional flash styled game and ramping it up to the next level. The game mechanics are simple, enjoyable and thankfully stay out of the way of the story, leaving the player to enjoy Valiant Hearts for what it truly is. Finally the story is by far one of the best examples I’ve come across this year, with all the characters receiving the right amount of screen time and development required for its ultimate emotional climax. If you, like me, have been feeling let down by the offerings of story based games of late then I can wholeheartedly recommend Valiant Hearts as the cure to what ails you.
Valiant Hearts is available on PC, PlayStation 3, PlayStation 4, Xbox 360 and Xbox One right now for $14.99, $22.95, $22.95, $19.95 and $19.95 respectively. Game was played on the PC with 5 hours of total play time.
Ever since I first saw a 3D printer I wondered how long it’d be before they’d start scaling up in size. Now I’m not talking about incremental size improvements that we see every so often (like with the new Makerbot Z18); no, I was wondering when we’d get industrial scale 3D printers that could construct large structures. The step between your run of the mill desktop 3D printer and something of that magnitude isn’t a simple matter of scaling up the various components, as many of the assumptions made at desktop size simply don’t apply when you get into large scale construction. It seems that day has finally come, as Suzhou Yingchuang Science and Trade Development Co has developed a 3D printer capable of creating full size houses:
Details of the makeup of the material used, as well as its structural properties, aren’t currently forthcoming, however the company behind them claims that it’s about 5 times as hard as traditional building materials. They’re apparently using a few of these 3D printed buildings as offices for some of their employees so you’d figure they’re somewhat habitable, although I’m sure they’re in a much more finished state than the ones shown above. Still, for a first generation product they seem pretty good and if the company’s claims hold up then they’d become an attractive way to provide low cost housing to a lot of people.
What I’d really be interested to see is how the cost and materials used compare to those of traditional construction. It’s a well known fact that building new housing is an incredibly inefficient process, with a lot of materials wasted during construction. Methods like this provide a great opportunity to reduce the amount of waste generated as there’s no excess material left over once construction has completed. Further refinement of the process could also ensure that post-construction work, like cabling and wiring, is also done in a much more efficient manner.
I’m interested to see how inventive they can get with this as there’s potentially a world of new housing designs out there to be exploited using this new method. That will likely be a long time coming however, as not everyone will have access to one of these things to fiddle around with, but I’m sure just the possibility of a printer of this magnitude has a few people thinking about it already.
I used to be pretty blasé when it came to someone spoiling things for me. Whilst it could be a little irritating to find out the ultimate outcome of something before I had had a chance to experience it for myself I still usually enjoyed it regardless. However my rather voracious appetite for competitive DOTA2 has seen my tolerance of results drop considerably, as much of the tension disappears when you know who’s going to win a certain match. In this modern age where everything is broadcast immediately and with reckless abandon, avoiding spoilers has become an exercise in frustration. It seems that no matter where you go there’s someone, or something, that will ruin your experience, intentionally or otherwise.

I had this exact problem happen to me today when I was browsing around for the wild card games being played for this year’s The International DOTA2 tournament. After not being able to find the games (likely due to me being offline at the time) I saw some updates in my compendium and figured I’d check them out. Lo and behold there was the winner of the current round of games displayed, thereby informing me of the ultimate outcome. Considering there weren’t that many teams competing in this round it meant pretty much every game had a very predictable outcome and, whilst I still enjoyed watching the games afterwards, the usual tension was gone and I was far less invested than I’d normally be.
The game client itself isn’t the only source of spoilerific content either; the live streams which many tune into for this content are also quite guilty of spoiling things by casually mentioning results or even displaying future games as part of their highlights section. Since they’re witnessing the games live it’s understandable that they’d forget that not everyone was tuning in at the same time they were, however it’s still something that can really ruin your experience. Watching the entire stream in chronological order can alleviate this somewhat, however that also means wading through hours of content in order to see the parts you want.
This isn’t a problem without solutions however they’re often slow in coming forward or completely ineffectual. The DOTA2 client has come some way in that regard, letting you select individual games to watch the replays of, however since the most recent game is always displayed at the top you’re guaranteed to see it unless you cover your screen with your hand (a rather unreliable method). There are some good VOD sites like DOTA2vods which do spoiler free sections although they’re often out of order, meaning you have to figure out the order of matches yourself. Both of these issues can be sorted out with a little more development work and manual intervention where appropriate, something which I’m hoping both Valve and the site owners do sooner rather than later.
The communities are thankfully becoming more aware of just how much spoilers can ruin one’s experience of things like this with the community on Reddit being rather good at policing threads with spoilers in their titles. By far the best example of this is the spoilerfreesc subreddit which I’ve yet to see replicated for other eSports. It’s somewhat understandable that this hasn’t happened due to the rather large amount of overhead incurred but it’s still something that I’m sure many would like to see developed.
Indeed I don’t think these problems are unique to my sport of choice, nor are they new. It’s more that the problem has been exacerbated by the ease of which information can spread, both through intentional and unintentional means. There’s really no quick and easy solution for it, more the responsibility is on everyone involved to avoid divulging information that might not be appropriate. Hopefully between that and a few more technological advances we can do away with spoilers of this nature for good.
It’s been almost 6 years since I first began writing this blog. If you dare to trawl through the early archives there’s no doubt that the writing in there is of lower quality, much of it to do with me still trying to find my voice in this medium. Now, some 1300+ posts and countless invested hours later, my writing has improved dramatically and every day I feel far more confident in my ability to churn out a blog post that meets a certain quality threshold. I attribute much of that to my dedication to writing at least once a day, an activity which has seen me invest thousands of hours into improving my craft. Indeed I felt that this was something of an embodiment of the 10,000 hour rule at work, something that newly released research says isn’t the main factor at play.
The study conducted by researchers at Princeton University (full text available here) attempted to discern just how much of an impact deliberate practice has on performance. They conducted a meta analysis of 150 studies that investigated the relationship between these two variables and classified them along major domains as well as the methodology used to gather performance data. The results show that whilst deliberate practice can improve your performance within a certain domain (and which domain it’s in has a huge effect on how great the improvement is) it’s not the major contributor in any case. Indeed the vast majority of improvements are due to factors that reside outside of deliberate practice, which seemingly throws into question the idea of 10,000 hours worth of practice being the key component to mastering something.
To be clear though, the research doesn’t mean that practice is worthless; indeed in pretty much every study conducted there’s a strong correlation between increased performance and deliberate practice. What this study does show is that there are factors outside of deliberate practice which have a greater influence on whether or not your performance improves. Unfortunately determining what those factors are was out of the scope of the study (it’s only addressed in passing in the final closing statements of the report), but there are still some interesting conclusions to be drawn about how one can go about improving themselves.
Where deliberate practice does seem to help with performance is with activities that have a predictable outcome. Indeed performance on routine activities shows a drastic improvement when deliberate practice is undertaken, whilst unpredictable things, like aviation emergencies, show less improvement. We also seem to overestimate our own improvement due to practice alone, as studies that relied on people remembering past performances showed a much larger improvement than studies that logged performances over time. Additionally, for the areas which showed the least amount of improvement due to deliberate practice, it’s likely that there’s no good definition for “practice” within those domains, meaning it’s much harder to quantify what needs to be practiced.
So where does this leave us? Are we all doomed to be good at only the things which our nature defines for us, never able to improve on anything? As far as the research shows, no; deliberate practice might not be the magic cure-all for improving but it is a great place to start. What we need to know now is what other factors play into improving performance within specific domains. For some areas this is already well defined (I can think of many examples in games) but for other domains that are slightly more nebulous in nature it’s entirely possible that we’ll never figure out the magic formula. Still, at least now you don’t have to worry so much about the hours you put in, as long as you still, in fact, put them in.
Ever since the retirement of the Space Shuttle the USA has been in what’s aptly described as a “launch gap”. As of right now NASA is unable to launch its own astronauts into space and instead relies completely on the Russian Soyuz missions to ferry astronauts to and from the International Space Station. This isn’t a particularly cheap exercise, coming in at some $70 million per seat, making even the bloated shuttle program look competitive by comparison. NASA had always planned to develop another launch system, originally slated to be dubbed Ares and developed completely from scratch, however that was later scrapped in favour of the Space Launch System which would use many of the Shuttle’s components. This was in the hope that the launch gap could be closed considerably, shortening the time NASA would be reliant on external partners.
News comes today that NASA has approved funding for the project, set to total some $6.8 billion over the next 4 years. The current schedule has the first launch of the SLS pegged for some time in 2017 with the first crewed mission to follow around 4 years later. Developing a whole new human rated launch capability in 7 years is pretty good by any standards, however it also raises the question of whether or not NASA should be in the business of designing and manufacturing launch capabilities like this. When Ares and SLS were first designed the idea of a private company being able to provide this capability was still something of a fantasy, however that’s no longer the case today.
Indeed SpaceX isn’t too far off deploying their own human rated craft that will be capable of delivering astronauts to the ISS, Moon and beyond. Their current schedule has the first crewed Dragon flight occurring no sooner than 2015 which, even with some delays here and there, would still have it happening several years before the SLS makes its manned debut. Looking at the recent Dragon V2 announcement it would seem like they’re well on their way to meeting those deadlines which will give the Dragon several years of in-flight usage before the SLS is even available. With NASA being far more open to commercial services than they used to be it does make you wonder what their real desire for the SLS is.
There’s an argument to be made that NASA has requirements that commercial providers aren’t willing to meet which, when it comes to human rated vessels, is mostly true. Man rating a launch system is expensive due to the numerous requirements you have to meet, so most providers opt not to do it. SpaceX is the notable exception, as they’ve committed to developing the man rated Dragon even if NASA doesn’t commit to buying launches on it. Still, the cash they’re dropping on the SLS could easily fund numerous Dragon launches, enough to cover NASA for the better part of a decade if my finger-in-the-air maths is anything to go by.
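The finger-in-the-air maths is simple enough to spell out, using only the figures mentioned above. Note the big assumption: I’m using the $70 million Soyuz seat price as a stand-in for what a commercial crew seat might cost, since the actual Dragon price isn’t something the article establishes.

```python
# Rough, back-of-the-envelope estimate using only numbers from the article.
# Assumption: a commercial crew seat costs roughly the same as the
# $70 million per-seat price NASA currently pays for Soyuz flights.
sls_budget = 6.8e9   # approved SLS funding over the next 4 years, USD
seat_price = 70e6    # current per-seat Soyuz price, USD

seats = sls_budget / seat_price
print(round(seats))  # roughly 97 seats' worth of funding

# At a notional dozen seats a year for ISS crew rotation, that works out
# to around 8 years of coverage, i.e. "the better part of a decade".
years = seats / 12
print(round(years))
```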
The only argument which I feel is somewhat valid is that NASA’s requirement for heavy lift outstrips pretty much any launch system commercially available today. There’s really not much call for large single payloads unless you’re shipping humans into space (we’ve got an awfully long list of requirements compared to our robotic cousins) and so most of the big space contractors haven’t built one. SpaceX has plans to build rockets capable of doing this (the Falcon XX) although their timeframes are somewhat nebulous at this point in time. Still, you could use a small portion of the cash set aside for the SLS to incentivise the private market to develop that capability, as NASA has done quite successfully with its other commercial programs.
I’ve long been of the mind that NASA needs to get out of the launch system business so they can focus their time and resources on pushing the envelope of our capabilities in space. The SLS might fill a small niche that’s currently unserviced but it’s going to take its sweet time in getting there and will likely not be worth it when it finally arrives.
I was never really a big fan of doing books of puzzles, like crosswords or sudoku. I understand the attraction to some degree: once you’ve got a modicum of skill such puzzles can be relaxing, as you don’t really think about much else while you’re doing them. Since I’m primarily an escapist when it comes to games, the idea of a puzzle-only game like Lyne didn’t really appeal to me at first, but after playing for a couple of hours it became more of an optimization problem, one with a very simple set of rules that could create rather complicated problems.
The basic premise of Lyne is simple: there’s a bunch of differently coloured shapes and all you need to do is connect them together. All of the shapes of one colour must be connected, so you can’t be tricky and skip certain blocks to make your life easier. Additionally, every path can only be crossed once, which means the path you take for one colour determines what paths are left available for the remaining ones. The number of puzzles available in Lyne is rather staggering, on the order of 600 or more by my guess, which should be enough to keep even the most intrepid puzzle solver busy for a while.
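Those two rules are simple enough to express as a little solution checker. To be clear, this is my own sketch of the constraints as I understand them, not anything from the game itself: the node names, colours and path format are all invented, and it deliberately skips board-adjacency details (in the real game a line can only move between neighbouring nodes).

```python
# A minimal sketch of Lyne's two core rules as a solution validator.
# Everything here (node names, input format) is a hypothetical model,
# not the game's actual implementation.

def valid_solution(nodes, paths):
    """nodes: dict mapping node name -> colour.
    paths: dict mapping colour -> ordered list of nodes traced by that line.
    Checks the two rules: every node of a colour must be visited by that
    colour's line, and no segment between two nodes may be used twice."""
    used_edges = set()
    for colour, path in paths.items():
        # Rule 1: the line must pass through every node of its colour.
        required = {n for n, c in nodes.items() if c == colour}
        if not required <= set(path):
            return False
        # Rule 2: each segment may only be crossed once, in either direction.
        for a, b in zip(path, path[1:]):
            edge = frozenset((a, b))
            if edge in used_edges:
                return False
            used_edges.add(edge)
    return True

nodes = {"A1": "red", "A2": "red", "B1": "blue", "B2": "blue"}
print(valid_solution(nodes, {"red": ["A1", "A2"], "blue": ["B1", "B2"]}))  # True
print(valid_solution(nodes, {"red": ["A1", "A2", "A1", "A2"],
                             "blue": ["B1", "B2"]}))  # False: segment reused
```

It’s the second rule that makes the puzzles interesting: each line you draw consumes segments that every other colour then has to route around.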
Lyne is incredibly simplistic in its aesthetic, using solid colours and distinct shapes for everything. At first I thought the unlocking of additional colour palettes was a bit of a gimmicky way of getting you to play for longer, but they actually function really well as a visual break. Since the style is so basic everything starts to blend into each other after a while, so changing up the colours helps to stop that from happening. The palette unlocks seem to be spaced out evenly enough that you’ll get a new one before you get bored of your current one which, I admit, did make me play for longer than I’d first anticipated.
The puzzle sets are well thought out, starting out easy in order to introduce the concept for that set and then ramping up the difficulty as you make your way through each of the puzzles. For me personally the most challenging ones always seemed to be somewhere in the middle of the set, usually because I was missing some trick that would enable me to progress. Indeed the more puzzles you do the more patterns you’ll recognise, something which can both simplify and complicate a puzzle for you. Some basic rules, like finding which paths must go somewhere and dividing the board up by colour, helped to get me past some of the trickier puzzles, although even those could sometimes leave me in a tizzy.
The daily puzzles were an interesting aside from the regular sets as, from what I could tell, they are generated on the day by some kind of algorithm. In fact I think this is how most of the puzzles were likely generated, however the regular sets appear to have been curated somewhat by the developer whilst the daily ones seem far more random, with some being incredibly easy and others horrendously complex. Still, if you’re the kind of person that likes doing a puzzle daily then this will be a brilliant little feature for you, as it’s almost guaranteed that these puzzles will be unique every day.
Lyne is an interesting minimalistic puzzle game that looks deceptively simple at first glance. The mechanics are simple enough that you can figure them out without instruction but, like many things with simple origins, mastering those rules will prove to be far more challenging. Like all games of this nature it does tend to become somewhat repetitive after a while; however, if you’re the kind of person who thrives on technical challenges like this then Lyne will provide endless hours of enjoyment.
Lyne is available on PC, Windows Phone, Android and iOS right now for $2.99, $2.49, $2.99 and $2.99 respectively. Game was played on the PC with 4 hours of total play time and 27% of the achievements unlocked.