You don't have to look far to know what the greater gaming community thinks of the latest installment in the SimCity series. The first couple of weeks were plagued with issues: many people were simply unable to play, while the lucky few who got in experienced multiple game-breaking bugs. Accusations flew left and right, with Maxis eventually stating it was all their fault, although it was hard to deny that EA had a hand in it as well. It was so bad that EA even offered all purchasers another free game as compensation (even if you bought after they announced the offer, which I did). Still, with all that in mind, I tried to approach SimCity with as open a mind as possible, hoping to see the game outside of all the teething issues that have plagued it relentlessly.
SimCity is, just like all its predecessors, a game that revolves around building and improving your very own city. You're given a small amount of cash and your choice of various plots of land to begin with, and after that it's up to you to make it on your own. There are numerous factors that influence how your city develops, from your layout to natural resources and even how well developed the greater region is. There's no built-in narrative to speak of, but the story of each individual city will be different, which leads to some great conversations about how you overcame the various adversities sent your way.
Graphically SimCity is very reminiscent of other Maxis games in that it's not exactly cutting edge, but that has the advantage of running on pretty much everything. The use of a tilt-shift effect when you're zoomed in quite far is a nice touch, although it doesn't help hide some of the extremely low-poly models used. A quick bit of searching reveals this isn't the first 3D SimCity; its predecessor, SimCity Societies, looks pretty similar. Considering that game was released over 5 years ago now I would've expected a much bigger jump, especially considering neither title was available on consoles (and before you ask why they'd release it on consoles, they did exactly that with SimCity 2000).
You begin by either joining someone else's region or creating one of your own. Joining someone else's has the advantage of potentially giving you a lot of benefits if they've got some big cities set up already, like access to upgraded buildings before you'd have the capability to build them, but like most people I chose to start out on my own. After creating your region you're then sent to select a section of it to begin building your town in, and thus begins your journey to becoming the world's best mayor.
From a core gameplay perspective there's not a whole lot that's changed over the years. You build roads, which now come with handy guidelines so you don't make odd-sized sections, zone them for Residential/Commercial/Industrial and then wait for people to arrive. As more people come into your town their requirements for various bits of infrastructure increase, so you'll quickly be adding things like water towers, power stations, sewage outlet pipes and so on. Unlike previous SimCity games you don't have to lay each bit of connecting infrastructure separately, as everything follows the road, which makes things a heck of a lot easier. Eventually you'll reach a point where you want to start attracting higher-wealth individuals to your town, and that requires some rather careful planning.
It's all well and good to lay everything out in order to maximize the amount of space available for people to build on, and indeed that's what will drive your population forward in the beginning, however you'll eventually need to add additional services, which have circular areas of influence. This is somewhat at odds with the regular way of doing things, especially if you're using the guidelines, which can lead to some hard decisions. Early on it's not too bad, but later on when you're dealing with giant skyscrapers the decision to knock one down in an attempt to make the rest of the region more desirable can back you into some painful corners. This is all part of the challenge, however, as your progression from a low-density, low-wealth town to a high-density, high-wealth one is predicated on how well you can make decisions like that.
As I mentioned previously, one city's progress benefits the whole region and thus there's really no shame in starting another town should you tire of your current one. Indeed I found my stride somewhere in the middle of my third city, one that was able to leverage all the upgrades my previous towns had. It's also very clear that some locations are far more ideal than others, as any place with hills is pretty much guaranteed to be unusable, so whilst choosing a lake frontage with mountaintop views sounds like a good idea initially, you'll likely hit its limit far faster than you would a boring, flat patch of dirt in the middle of nowhere.
Now I deliberately avoided playing SimCity until things had calmed down, in the hopes that I could avoid some of the issues that caused such an uproar. I did have a few teething issues stemming from Origin not being installed properly (although all other Origin games worked fine, strangely) and the installer simply refusing to run, but once I was past that I was always able to log in. Unfortunately I was unable to play with one of my friends due to him starting on region 2 and me on region 1, something which I thought wouldn't be a problem, but EA has locked those regions down to only those who've played on them before. Sure, we could start over again, but that's not what we wanted to do, which was a little annoying.
Whilst that was irritating it was nothing compared to the dumb-as-a-rock AI that SimCity uses. There's been quite a bit of investigation into why this is, but it all boils down to the pathfinding algorithm, which is used for pretty much everything in the game. Sims, cars, electricity, etc. all use the absolute shortest path to get to a destination. Because of this you get a whole lot of really illogical, emergent behavior from various systems. The best (or worst, really) example I can come up with is that in one of my towns there are 2 garbage dumps, each with numerous trucks. However, upon picking up rubbish they will all go back to the same garbage dump, even though it's full and the other one is not much further away. The only way to get around this is to make the routes almost identical in length (i.e. put the dumps right next to each other), which is a right pain in the ass. You'll also find that this affects things like buildings in certain areas (some commercial/industrial places will never get workers and will be routinely abandoned). You can work around this with careful city planning, but realistically you shouldn't have to, as the AI should be smart enough to apply costs to paths that would avoid those situations completely.
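The underlying problem is easy to demonstrate. The game's actual routing code isn't public, so what follows is just a toy Python sketch (the map, distances and penalty value are all made up for illustration): when agents pick destinations purely by shortest path they all pile onto the same target, whereas adding even a crude congestion cost to the path score spreads them out.

```python
import heapq

def dijkstra(graph, start, goal):
    """Plain shortest-path search over a weighted adjacency list."""
    queue = [(0, start)]
    seen = set()
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + weight, nxt))
    return float("inf")

def choose_dump(graph, start, dumps, load, load_penalty=0):
    """Pick a dump by travel distance plus an optional congestion penalty."""
    return min(dumps, key=lambda d: dijkstra(graph, start, d) + load_penalty * load[d])

# Toy map: dumpA is slightly closer to the pickup point than dumpB.
graph = {"pickup": [("dumpA", 2), ("dumpB", 3)]}
dumps = ["dumpA", "dumpB"]

naive, aware = [], []
load_naive = {d: 0 for d in dumps}
load_aware = {d: 0 for d in dumps}
for _ in range(4):
    d = choose_dump(graph, "pickup", dumps, load_naive)  # pure shortest path
    naive.append(d); load_naive[d] += 1
    d = choose_dump(graph, "pickup", dumps, load_aware, load_penalty=2)
    aware.append(d); load_aware[d] += 1

print(naive)  # every truck heads for the nearest dump
print(aware)  # the congestion cost alternates trucks between the two dumps
```

That extra cost term is all it would take for the second truck to notice the first dump is already being serviced, which is why the pure-shortest-path behaviour feels so needlessly dumb.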
There also comes a time when your city has reached a certain point and there's not much more you can do to it until you get more money or your population increases. When this happens you're pretty much relegated to waiting out the clock, which can get rather boring. Indeed I found that once I was getting around the 75,000 population mark there was little I could do to speed up population growth, as anything I did either did nothing or caused a dip before it recovered again. I might just not be getting it, or I may have reached the limitations of my current city design, but since none of my advisors were saying anything productive and my approval rating was 85% I struggled to see what else I could do. Searching for guides didn't really help either, which led to me giving up on the city and trying again.
I found it pretty easy to lose a lot of time on SimCity, as the initial stages are always a fun little balancing act that drew me in much like Anno 2070 did. Still, there was always a timer ticking in the background, counting down to the point where I'd be unable to see a way to grow my city further and would simply start again. I'm glad to say that the majority of the issues that plagued its launch are gone now, but there are still some teething issues with getting the game up and running, and the dumb-as-bricks AI can't be updated quickly enough. Overall it's an average game which unfortunately falls short of many of the expectations placed on it, but none of them are beyond fixing. Well, apart from Maxis/EA's reputation, that is.
SimCity is available on PC right now for $99.99. Total play time was 9 hours.
The AAA game industry is unquestionably a hit-based business and consequently there isn't a lot of room in the market for dozens of companies to compete successfully. Whilst there are many companies making a rather good living from such games, able to deliver title after title that will sell 10 million+ copies, these are predominantly sequels in established IPs whose success stems largely from their dedicated fan bases. Smaller publishers with larger aspirations are still quite numerous though, with many of them burning through untold amounts of capital in the hopes of replicating such success. As far as I can tell this way of doing business isn't sustainable, but that doesn't mean that quality titles have to disappear.
Square Enix recently published the sales figures for its last 3 big hit games and for plebs like me they don't look too shabby. Indeed there are many titles I know of with lesser sales figures that were considered wildly successful, and I'm not just talking about runaway indie hits. Heavy Rain, for example, would easily be considered around the same level of quality as any of those titles and it has managed to snag some 2 million sales over the course of its life. Quantic Dream had said previously that their expectations were more around the 200,000~300,000 mark, so the order-of-magnitude increase was completely unexpected, showing that big sales aren't required to produce polished games. Turning back to Square Enix then, you have to wonder what drove them to expect much higher sales, especially in light of their past performance.
I think the main reason is the amount of capital they invest in these titles, thinking that it will have a direct causative effect on how many sales they'll get out of them at the end. Whilst this is true to a point, I don't think Square Enix is doing it efficiently: whilst their games are objectively good (on par with those whose sales are much higher), most of them simply lack the dedicated community which drives those massive sales. In that regard Square Enix needs to drastically cut its overall sales expectations and rework its game development budgets accordingly, because if selling multiple millions of copies isn't profitable¹ then you've got to seriously reconsider your current business practices.
Indeed I feel this is a major issue with the games industry today. Many of the bigger titles are developed with big sales in mind and that means both developers and publishers aren't willing to take risks on titles that might not perform. Sure, we get a few token efforts from them every so often, but it's a sign of how little innovation there is from the big guys when indie developers are able to churn out innovative titles by the truckload. I'm not saying it's better or worse if either side of the industry does the innovating, more that the big developers and publishers are stuck in a rut of churning out sequels or, in the case of Square Enix, thinking they'll make it big if they copy the formula.
¹They haven't said that any of these titles weren't profitable, but their predicted $138 million loss this year would seem to indicate that none of them were. The loss could also be heavily influenced by the redevelopment of their failed Final Fantasy MMORPG, FFXIV, but the breakdown didn't go into this unfortunately.
Microsoft's flagship product, Windows, isn't exactly known for its rapid release cycle. Sure, for things like patches, security updates, etc. they're probably one of the most responsive companies out there. The underlying operating system, however, is updated much less frequently, with the base feature set remaining largely the same for the current 3-year product life cycle. In the past that was pretty much sufficient, as the massive third-party application market for Windows made up for anything that might have been lacking. Customers are increasingly looking for more fully featured platforms, however, and whilst Windows 8 is a step in the right direction it had the potential to start lagging behind its other, more frequently updated brethren.
Had Windows 8 stayed as a pure desktop OS this wouldn’t be a problem as the 3 year product cycle fit in perfectly with their largest customer base: the enterprise. Since Windows 8 will now form the basis of every Microsoft platform (or at least the core WinRT framework) they’re now playing in the same realm as iOS and Android. Platform updates for these two operating systems happen far more frequently and should Microsoft want to continue playing in this field they will have to adapt more rapidly. Up until recently I didn’t really know how Microsoft was planning to accomplish this but it seems they’ve had something in development for a while now.
Windows Blue is shaping up to be the first feature pack for Windows 8, scheduled for release sometime toward the end of this year. It's also the umbrella term for similar updates happening across the entire Microsoft platform around the same time, including their online services like Outlook.com and SkyDrive. This will be the first release of what will become a yearly platform update that will bring new features to Windows and its surrounding ecosystem. It will not be in lieu of the traditional platform updates, however, as there are still plans to deliver Windows 9 on the same 3-year cycle that we've seen for the past 2 Windows releases.
Whilst much of the press has been around the leaked Blue build and what it means for the Windows platform, it seems that this dedication to faster product cycles goes far deeper. Microsoft has shifted its development mentality away from its traditional iterative process to a continuous development process, no small feat for a company of this magnitude. Thus we should expect the entire Microsoft ecosystem, not just Windows, to see a similarly rapid pace of development. They had already done this with their cloud offerings (which seem to gain new features every year) and the success they saw there has been the catalyst for applying it to the rest of their product suites.
Microsoft has remained largely unchallenged in the desktop PC space for the better part of 2 decades, but the increasing power of mobile devices has begun to erode their core business. They have made the smart move to start competing in that space with a unified architecture that will enable a seamless experience across all platforms. The missing piece of the puzzle was the ability to rapidly iterate on said platform like the majority of their rivals do, something which the Blue wave of products will begin to rectify. Whether it will be enough to pull up some of their worse-performing platforms (Windows Phone) remains to be seen, but I'm sure we can agree that it will be beneficial, both for Microsoft and for us as consumers.
If there's one thing that Australia has going for it at the moment it's the duo of a well-regulated banking industry coupled with a strong economy that has seen us weather some of the worst financial crises we've seen in decades. The Global Financial Crisis came and went without leaving much of a lasting impact, and for the most part we've been immune to the Eurozone Crisis. For an industry that relies on trust you really couldn't find a better environment than Australia at the moment, as compared to nearly every other place on earth the trust in our banking system is extremely high.
If I were to choose a place that is the exact opposite, my country of choice would of course be Cyprus. For the uninitiated, Cyprus is a small island nation of about 1 million people and is renowned for being something of a tax haven. This is due to its extremely favourable tax rates on savings accounts, which led to the banks storing more wealth than the entire nation's GDP. When everything's going well this isn't much of a problem, as the steady flow of capital helps keep both the nation and the banks afloat. However, when things turn bad, like they have done during the Eurozone Crisis, what you have is an island nation left in a rather difficult situation, as it lacks the tools to deal with such colossal entities failing.
The issues stem from the Greek financial crisis, as the Cypriot banks had amassed some €22 billion worth of Greek private sector debt. When much of this debt was written down in order to save Greece (and thus the Euro itself), the Cypriot banks were hit hard and in turn had their credit ratings downgraded. This led to a downward spiral of bad debt piling up, banks defaulting on loan payments and the Cypriot government, with a GDP below the debt its banks had amassed, being completely unable to deal with it. So, like any other EU member, they approached the European Commission, the International Monetary Fund and the European Central Bank for a bailout. They were able to secure one, however before they could get it they needed to raise some €7 billion, and the method by which they did this was, to put it bluntly, spectacularly ill-conceived.
The initial proposal to raise these funds was a one-off tax on all savings deposits, with accounts under €100,000 losing 6.7% and those above losing 9.9%. They began mulling this particular deal over the weekend in order to be able to enact the legislation before everyone had a chance to get their money out, but as soon as news began to spread the beginnings of a bank run started taking shape. ATMs were quickly emptied of their cash and long lines formed as people tried to get as much of their money out of Cypriot banks as possible before they were slugged with the tax. The initial proposal didn't get through, however, and the Cypriot government had to order the banks not to open; they've been closed ever since.
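To put those percentages in concrete terms, the reported rates make the levy on any given account easy to work out. A minimal sketch, assuming the flat rate applies to the whole balance (and that accounts exactly at €100,000 fall into the higher band, which the reports didn't clarify):

```python
def levy(balance, threshold=100_000, low_rate=0.067, high_rate=0.099):
    """One-off deposit levy under the initial Cyprus proposal: a flat
    rate on the whole balance, chosen by which band the account is in.
    Rounded to the nearest cent."""
    rate = low_rate if balance < threshold else high_rate
    return round(balance * rate, 2)

print(levy(50_000))   # 3350.0  -> a 50,000 euro saver loses 3,350 euro
print(levy(500_000))  # 49500.0 -> a 500,000 euro depositor loses 49,500 euro
```

Losing thousands of euros overnight from an ordinary savings account is exactly why the queues formed at the ATMs.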
News reaches us today that the Cypriot government has managed to reach a resolution, with the one-off tax now being restricted to accounts over €100,000. What the particular rate will be remains a mystery, but you can guarantee it will have to be higher than the initial proposal to make up for the revenue lost on accounts below that threshold. The deal will also see one of the bigger banks broken down into a toxic asset dump and a small, viable business, and there have been calls for the same thing to happen to its largest bank. No matter what they end up doing, the damage has been done to their banking industry and I'm not sure it'll ever be able to recover.
You see, banking relies on a certain amount of trust, especially when it comes to things like savings accounts. You trust your bank won't lose your money and, in the case of the government, you trust that they won't come after it unless you're directly responsible for something. The Cypriot people, and their foreign depositors, are essentially being punished for the mistakes of the banks, and there's no guarantee anyone can make that something like this won't happen again. Thus the only smart thing for anyone to do is to get their money out of there as soon as humanly possible, lest the same thing repeat itself in the future.
It's not like this couldn't happen elsewhere, indeed New Zealand is considering a similar move, but the reputation Cyprus had as a great place to store capital is now in tatters. Future depositors will think twice before sending money there again, because it's clear that the tiny nation can't deal with the mistakes of its banks due to the huge influence they have on its economy. After the tax goes through I doubt any of the large creditors will be keeping their money in there for long, and it's likely a bank run will still occur once the banks reopen their doors. With that the finance industry in Cyprus will be dealt a crippling blow, one which it will be unlikely to recover from.
It might be for the good of the country in the long term, however: since no one will store capital there any more, it's unlikely they'll get into a situation like this again. I'm not entirely sure that's a good thing though, as it takes an axe to what was once a very profitable industry for the Cypriot people. Realistically, the blame for all of this lies directly with their government, one that should have taken better precautions to avoid a situation like this in the first place.
There's little doubt that the majority of the games industry is skewed towards the male gender, primarily because it was a male-dominated industry, in both production and consumption, for much of its nascent life. Depending on your platform of choice this is still very much the case (although, strangely, 40% of PC gamers are women, compared to 25% on consoles), but overall the balance is much closer to the actual gender split than it has ever been before. With that in mind you'd think that the choice to use a female protagonist, something that isn't exactly a new idea, wouldn't exactly be controversial.
News came last week however that Dontnod Entertainment, an independent developer based out of Paris, struggled to convince publishers to accept their game which features a female lead character:
“We had some [publishers] that said, ‘Well, we don’t want to publish it because that’s not going to succeed. You can’t have a female character in games. It has to be a male character, simple as that,'” he told Penny Arcade. “We wanted to be able to tease on Nilin’s private life, and that means for instance, at one point, we wanted a scene where she was kissing a guy. We had people tell us, ‘You can’t make a dude like the player kiss another dude in the game, that’s going to feel awkward.’
I'll admit that given a choice between playing a male or a female character in a game I'll choose the male one. For me the reasoning is simple: if given the choice I feel like I'm projecting myself into the game and thus want my avatar to represent me (and, given the option, will recreate myself in the ultimate form of narcissism). However my experience doesn't really differ that much should that choice be made for me one way or the other, as then I'm playing as that character in the story, not as a direct representation of myself. Thus the idea of having my character kiss another guy (or girl) in the game won't make me feel awkward, unless that was what was intended.
What gets me though is the idea that the publisher thought that a game wouldn’t sell due to the female lead. Sure if you’re targeting consoles there’s an argument to be made that you want to target your largest available audience. However with titles like Tomb Raider breaking its own sales records, even on consoles, that kind of logic doesn’t really hold up. It’s not just the success of that particular franchise either as things like the Dreamfall Chapters Kickstarter showed that there’s lots of demand for these types of games and not just from the female gamer crowd.
Honestly, when I first read this my anger was directed at the general gaming populace, as I felt that's where the publishers would be drawing these conclusions from. Digging deeper into it, however, I feel that it's more the publishers making those decisions for us, as there are many examples of great-selling games that have female protagonists or even just strong female characters. Personally I feel that we gamers are far more comfortable with the idea than the publishers give us credit for, especially with all the recent success stories. Hopefully it's just one naive publisher executive making an incorrect call, as I and all my gamer friends certainly have no issues with strong female leads.
Long-time readers will know that Starcraft II: Wings of Liberty has long held the crown for the highest rated game here on The Refined Geek. It's not an undeserved title either, as it managed to capture my attention in a way few games have been able to, and indeed only one (DOTA 2) has done so since. From the start I knew it was set to be a trilogy, carving the game up into 3 separate installments, each of which would focus on a single race. Heart of the Swarm, the second game in the Starcraft II trilogy, continues the story started in Wings of Liberty and, as the name implies, focuses primarily on the Zerg race.
Heart of the Swarm picks up not too long after the final events of Wings of Liberty. Kerrigan has been locked away in a test facility run by Prince Valerian, who is eager to see how much control she retains over the Zerg. Shortly after the final test is complete (which resulted in Kerrigan using the Zerg to destroy much of the test facility), Dominion forces attack, forcing them to evacuate. In the confusion, however, Raynor is left behind and Kerrigan refuses to leave without him. After waiting for him to contact her she sees a news report that he was captured and summarily executed, causing Kerrigan to swear brutal vengeance against Mengsk yet again.
As always Blizzard has delivered an incredibly beautiful game, one that will run well on nearly any system built within the past 4 years. Whilst the in-game graphics haven’t changed significantly, apart from higher-resolution textures and better lighting (which you could say is significant, I guess), the whole game feels a heck of a lot more polished. The in-between mission cut scenes, dialog sequences and cinematics have all seen improvements which are very obvious when comparing them side by side.
From a core game perspective Heart of the Swarm doesn't change much, with the standard real-time strategy mechanics applying throughout the game. However, like Wings of Liberty, not every mission is simply a build-army, send-at-enemy, rinse-and-repeat deal, with most of the missions being rather unique in their implementation. Of course there are your standard base/army-building missions, however most of them have a unique twist which can make them more complicated or provide opportunities to make them far easier, should you be willing to take the risk.
Whilst this might not be too different from Wings of Liberty (although individually the missions are all very different) the levels do seem to be better designed as I can remember struggling to get into the campaign in the original whilst it didn’t take me long to get hooked on Heart of the Swarm. Indeed since all the missions are so varied and unique I rarely found myself becoming bored with them. This ended up with me engaging in a rather ravenous binge on missions which only stopped when I realised I was playing on into the early hours of the morning. That hasn’t happened to me in a while and is a real testament to the quality of each mission in Heart of the Swarm.
Outside of the core missions you'll be given the opportunity to upgrade your units, giving them unique abilities that will make them far more effective in game. There are 2 types of upgrades available for all of your units, the first being a choice of 3 different specializations which you can change at any time. The second is a permanent change to the unit itself, giving it either additional abilities (like the Raptor Zergling pictured above, which can leap at targets and jump up cliffs) or an evolutionary path (like the Hydralisk being able to evolve into a Lurker). Thankfully you're not making this decision blind, as all of the permanent evolutions come alongside a mission that gives you a feel for how the new unit will behave and where it will be effective.
For long-time Starcraft players the upgrade paths have a pretty obvious "best" path, as certain combinations become almost completely unstoppable. Sure, each of them is viable in its own sense and some choices are better than others in some situations, however my initial combination of frenzied Hydralisks and Roaches that slowed their targets was enough to melt most armies without too much hassle. Once I got respawning Ultralisks it was pretty much game over for any large army, as they couldn't kill them quickly enough and all their precious siege defences just melted away, leaving the rest of their army vulnerable.
Wings of Liberty included some hero units, but apart from the basic in-game upgrades (which were only available during base-building missions) there wasn't much you could do to customize them. Heart of the Swarm often gives you direct control over Kerrigan, and her list of abilities is quite impressive. The good thing about this is you can craft her to fit your playstyle effectively, playing her as a big spell nuker, a tanky siege destroyer or a one-woman army that can take out bases without the assistance of any other units. On the flip side, however, this can make it feel like your army is just an accessory for Kerrigan, something that's nice but not necessarily required.
For me, I went with a tanky build that favoured direct attacks over spells. Her attacks would chain and she would attack faster with each subsequent attack, which allowed her to melt armies in short order. Couple that with a spammable healing ability and she was for the most part invincible; should she get into trouble I could simply walk her out of there whilst healing her every 8 seconds. It did seem somewhat unfair at times as, since the heal was AOE, I could keep my army going far longer than it normally should have been able to, which usually meant that once I hit 200/200 I rarely found myself building any further units. I get that she's supposed to be an immensely powerful being, but she does take some of the challenge out of it. Maybe it's different on Brutal (I played on Hard, for what it's worth).
Although there were no bugs to report, even with the streaming install which I thought would cause all manner of strife, there were a couple of issues that marred my experience in Heart of the Swarm. Whilst the out-of-mission upgrades were good, they were often choices between upgrades that are both available in multiplayer games. As someone who played Zerg back in Wings of Liberty (well, I played random for a long time, so I played all races) I often found myself missing some upgrades that overcome the inherent weaknesses of particular units. The removal of larva injects also didn't sit particularly well with me, as that was an ingrained habit, and its removal reduced the queens to creep tumour/heal bots which, after a certain point in the game, I'd only build when I was running low on larva. These aren't systemic issues with the game per se, but they definitely detracted from my experience.
Warning: plot spoilers below.
I also can't praise the story as highly as I did with Wings of Liberty, as Kerrigan starts off strong but quickly degenerates into a character with confused emotions who makes decisions that don't make a whole lot of sense. This might be because the over-arching plot is somewhat predictable (the twist about Raynor, for instance) and when her motivations don't line up with the direction you think they'd be going in it just feels…weird. I did like the nods to previously unresolved plot threads from the original Starcraft series (if you can't figure out who Narud is then your head is on backwards, hint hint), as Wings of Liberty only half alluded to them. The foreshadowing for the final instalment has got me excited for what's to come, even if the story might end up being not much more than your generic sci-fi action movie.
Plot spoilers over.
Starcraft II: Heart of the Swarm is a solid follow-up to Wings of Liberty, providing a highly polished experience that is par for the course for Blizzard games. All of the missions feel unique, banishing the usual RTS campaign drudgery and creating an experience that is both challenging and satisfying. Unfortunately I can't rate it as highly as its predecessor, as my many hours in multiplayer set up expectations which would probably never be met, and the strange treatment of Kerrigan as a central character marred an otherwise great experience. Still, these are comparatively minor nitpicks in a game that drew me in and trapped me for hours, and I would do it again willingly.
Starcraft II: Heart of the Swarm is available on PC right now for $48. Game was played on hard with around 12 hours of total play time and 35% of the achievements unlocked.
If you’ve ever worked in a multi-tenant environment with shared resources you’ll know of the many pains that can come along with it. Resource sharing inevitably leads to contention, and some of the time this means you won’t be able to get access to the resources you want. For cloud services this is par for the course since you’re always accessing shared services, so any application you build on these kinds of platforms has to take this into consideration lest it spend an eternity crashing from random connection drop outs. Thankfully Microsoft has provided a few frameworks which will handle these situations for you, especially in the case of Azure SQL.
The Transient Fault Handling Application Block (or Topaz, a far better name in my view) gives you access to a number of classes which take out a lot of the pain of dealing with the transient errors you get when using Azure services. Of those the most useful one I’ve found is the RetryPolicy which, when instantiated with the SqlAzureTransientErrorDetectionStrategy, allows you to simply wrap your database transactions with a little bit of code in order to make them resistant to the pitfalls of Microsoft’s cloud SQL service. For the most part it works well: prior to using it I’d get literally hundreds of unhandled exception messages per day. It doesn’t catch everything, so you will still need to handle some connection errors yourself, but it does a good job of eliminating the majority of them.
Currently however there’s no native support for it in Entity Framework (Microsoft’s data persistence framework), which means you have to do a little wrangling to get it to work. This StackOverflow question outlines the problem and there are a couple of solutions there which all work; I went for the simple route of instantiating a RetryPolicy and then just wrapping all my queries with ExecuteAction. As far as I could tell this all works fine and is the supported way of using EF with Topaz, at least until EF 6 comes out, which will have built-in support for connection resiliency.
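As a rough sketch of that approach (the context and entity names here are hypothetical, and the retry counts/intervals are just placeholder values, not recommendations), the wrapping looks something like this:

```csharp
using System;
using System.Linq;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Retry up to 5 times, backing off incrementally (1s initial delay,
// growing by 2s per attempt).
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));

// SqlAzureTransientErrorDetectionStrategy knows which SQL error codes
// count as transient and are therefore worth retrying.
var retryPolicy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(retryStrategy);

// Wrap the EF query in ExecuteAction so transient connection failures
// are retried instead of bubbling up as unhandled exceptions.
// "context" is an assumed DbContext instance; "Orders" a hypothetical DbSet.
var activeOrders = retryPolicy.ExecuteAction(() =>
    context.Orders.Where(o => o.IsActive).ToList());
```

Note that the query needs to be materialized (the ToList() call) inside the lambda; if you return a lazy IQueryable the actual database hit happens outside the retry policy and gains no protection.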
However when using Topaz in this way it seems to muck with entity tracking, causing returned objects not to be tracked in the normal way. I discovered this after I noticed many records not getting updated even though manually working through the data showed that they should have different values. As far as I can tell, if you wrap an EF query with a RetryPolicy the entity ends up not being tracked and you will need to call .Attach() on it prior to making any changes. If you’ve used EF before then you’ll see why this is strange, as you usually don’t have to do that unless you’ve deliberately detached the entity or recreated the context. So as far as I can see there must be something in Topaz that causes the entity to become detached, requiring you to reattach it if you want to persist your changes using Context.SaveChanges().
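A minimal sketch of the workaround I’m describing, again with hypothetical context/entity names and assuming the EF 4.1+ DbContext API:

```csharp
// Fetch the entity inside the retry policy; in my experience it comes
// back detached from the context.
var order = retryPolicy.ExecuteAction(() =>
    context.Orders.First(o => o.Id == orderId));

// Re-attach it so the context tracks it again (Attach puts it in the
// Unchanged state, after which modifications are picked up as normal).
context.Orders.Attach(order);

// Now changes are tracked and SaveChanges() will actually persist them.
order.Status = "Shipped";
context.SaveChanges();
```

Without the Attach() call SaveChanges() completes happily but writes nothing for that entity, which is exactly the silent failure mode that had me chasing phantom data for a while.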
I haven’t tested any of the other methods of using Topaz with EF so it’s entirely possible there’s a way to get the entity tracked properly without having to attach it after performing the query. Whether they work or not will be an exercise left for the reader as I’m not particularly interested in testing it, at least not just after I got it all working again. By the looks of it though an RC version of EF 6 might not be too far away, so this issue probably won’t remain one for long.
Ever wondered how we evolved to look the way we did today from our ancestors that lived millions of years ago? Wonder no longer:
I often find myself digging through our evolutionary history in order to find out why we have certain features or why we seem to lack certain adaptations that other species have. Whilst I don’t have a good explanation for everything that’s shown in the video above (had I more time I’d get my wife, a fledgling biologist, to comment on it) it is curious to see things like the progress of the nose and the reduction of the large forehead. It also struck me as to just how subtle some of the changes are from generation to generation and yet that gradual accumulation ends up with the face we all recognize.
The best thing about this video is how clear it makes the transition from our common ape ancestor to our current form as Homo sapiens. Whilst I know that simply showing someone a video like this won’t be enough to convince them that evolution is real (indeed, if you don’t want to understand it there’s little I can do for you) it does illustrate the point quite aptly. It also demonstrates the idea that whilst we share a common ancestor with modern apes we evolved along a different path alongside them, addressing the “well if we evolved from apes why are there still apes” question quite nicely.
Vaccines are the most effective form of disease prevention as they train our bodies to respond to pathogens long before we encounter them in the wild. They’re responsible for systematically wiping out several diseases that caused countless deaths around the world and have saved people from the lifelong consequences that survivors of those diseases would otherwise have to struggle with. You’d think with proven benefits like that the choice to use them, especially for the most vulnerable groups of people (i.e. children and the elderly), would be a no-brainer. Unfortunately it seems that as more time passes the more often I come across articles detailing the increased prevalence of anti-vaxxers, and I’m struggling to understand why.
Whilst the anti-vaxxer movement isn’t exactly new (indeed, as long as there have been vaccines there have been those opposed to them) this current wave can trace its origins back to Andrew Wakefield’s long since discredited research linking the MMR vaccine to autism spectrum disorders in children. Even though he has since been very publicly shamed over the matter, people still seem to link vaccines with all sorts of disorders they are simply incapable of producing. Worse still, this baseless fear is now spreading to other vaccines, modern ones with impeccable safety and efficacy records.
This little bastard is the Human Papilloma Virus (HPV), which is responsible for nearly all cervical cancers found in women today. Thankfully we now have a vaccine for it, and all it requires is 3 shots over the course of 6 months to drastically reduce the risk of ever developing those cancers. The vaccine is most effective when delivered to children before or during their early teens but it is still effective in older individuals (my wife had hers in her early twenties). Recent studies show that despite its proven track record of efficacy and safety, parents are becoming increasingly worried about it, with many stating that they’ll never vaccinate their children for HPV:
A rising percentage of parents say they won’t have their teen daughters vaccinated to protect against the human papilloma virus, even though physicians are increasingly recommending adolescent vaccinations, a study by Mayo Clinic and others shows. More than 2 in 5 parents surveyed believe the HPV vaccine is unnecessary, and a growing number worry about potential side effects, researchers found. The findings are published in the new issue of the journal Pediatrics.
Five years ago, 40 percent of parents surveyed said they wouldn’t vaccinate their girls against HPV. In 2009, that rose to 41 percent, and in 2010, to 44 percent.
Let’s tackle the idea that the vaccine is unnecessary first, as this means parents believe their children simply don’t need it, something which should be easy to disprove by looking up cancer rates. I’d accept that it’d be unnecessary if the incidence rates were low, but the fact of the matter is that cervical cancer is the second most common form of cancer in women and the fifth most deadly. The rates might look statistically low, however if you could eliminate that risk for any other form of cancer with a simple (and usually free) vaccination course I think you’d do it. Calling it unnecessary simply shows ignorance of how prevalent it really is.
The side effects of the HPV vaccine are also well known and for the vast majority of recipients (we’re talking 99.9999% here, and I’m not exaggerating) they’re mild and easily treatable with over-the-counter analgesics. In those rare cases where there are severe reactions, doctors are trained in how to respond to them and patients fully recover in short order. All of the other reported side effects, everything from waking comas to deaths, cannot be causally linked to the vaccine. Indeed, of the 20 or so deaths reported as adverse reactions to the vaccine, none were found to be caused by it and all were explained by other factors. Considering some 40 million people have been vaccinated with it so far and we can’t attribute anything to it but the prevention of cancer and some mild side effects, I think it’s fair to assume it’s safe.
I know I’ve been beating this horse (which seems to keep reviving itself) for some time now but it really does get to me that people are being wilfully ignorant of the facts about how effective, safe and necessary vaccines really are. Sadly, whilst it didn’t take me long to find all this information, it was shown right alongside a whole treasure trove of anti-vaxxer bullshit, which is why I continue to write things like this. It’s my hope that someone looking for good information on the subject will stumble across posts like these and be convinced that vaccines really are worth it.
My introduction to RSS readers came around the same time as I started to blog daily; after a little while I found myself running dry on general topics to cover and needed to start finding other material for inspiration. It’s all well and good to have a bunch of bookmarked sites to trawl through, but visiting each one is a laborious task, one that I wasn’t keen to do every day just to crank out a post. Thus I found the joys of RSS feeds, allowing me to distill dozens of sites down to a single page and dramatically cutting down the effort required to trawl through them all. After cycling through many, many desktop based readers I, like many others, eventually settled on Google Reader, and all was well from then on.
That was until last week when Google announced that Reader was going away on July 1st this year.
Google has been doing a lot of slimming down recently as part of its larger strategy to focus more strongly on its core business. This has led to many useful, albeit niche, products being shut down over the course of the past couple of years. Whilst the vast majority of these closures were expected, there have been quite a few notable cases where they’ve closed down things that still had a very active user base whilst other things (like Orkut, yeah, remember that?) which you’d figure would be closed down aren’t. If there’s one service that no one expected them to close down it would be Reader, but apparently they’ve decided to do so due to dwindling user numbers.
Whilst I won’t argue that RSS is the de facto standard for content consumption these days, it’s still proven to be a solid performer for anyone who provides it, and Google Reader was the RSS reader to use. Even if you didn’t use the reader directly there are hundreds of other products which utilize Google Reader’s back end to power their interfaces, and whilst they will likely continue on in spite of Reader going away it’s highly unlikely that any of them will achieve the penetration that Reader did. Even from my meagre RSS stats it’s easy to tell that Reader has at least 50% of the market, if not more.
If you doubt just how popular Reader was, consider that Feedly, shown above syncing with my feeds, managed to gain a whopping 500,000 users in the short time since Google made the announcement. They were actually so popular that right after the announcement their site was down for a good couple of hours and their applications on iOS and Android quickly became the number 1 free apps on their respective stores. For what it’s worth it’s a very well polished application, especially if you like visual RSS readers, however there are a few quirks (like feeds not being in strict chronological order) which stopped me from making the total switch immediately. Still, the guys behind it seem dedicated to improving it and filling the void left by Reader by replicating the Reader API (and running it on Google’s AppEngine, for the lulz).
From a business point of view it’s easy to understand why Google is shutting down services like this as they’re a drain on resources that could be better used to further its core business. However it was often these niche services that brought customers to Google in the first place, and by removing them they burn a lot of the goodwill that hosting them generated. I also can’t imagine that the engineers behind these products, many of which were born of Google’s famous 20% time, feel great about seeing them go away either. For something as big as Reader I would’ve expected them to try to innovate on it rather than abandon it completely, as looking over the alternatives there are still a lot of interesting things that can be done in the world of RSS, especially with such a dedicated user base.
Unfortunately I don’t expect Google to do an about-face on this one; there have been public outcries before (iGoogle, anyone?) but nothing seems to dissuade them once their mind has been made up. It’s a real shame as I feel there’s still a lot of value in the Reader platform, even if it pales in comparison to Google’s core business products. Whilst the alternatives might not be 100% there yet I have no doubt they’ll get there in short order and, if the current trend is anything to go by, surpass Reader in terms of features and functionality.