Posts Tagged ‘public’

Google’s App Engine Available For On-Premises Deployment.

The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. Whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that’s debatable once you start ramping up your VM sizes), it doesn’t take advantage of the investments in infrastructure that you’ve already made. For large, established organisations those investments are not insignificant, which is why many of them have been reluctant to transition fully to public cloud based services. It’s also why I believe the future of the cloud will be paved with hybrid solutions, something I’ve been saying for years now.

Microsoft has finally shown that they’ve understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack, on the other hand, brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I’m honest I thought that Microsoft would probably be the only ones to try this, given their presence in both the cloud and enterprise space, but it seems other companies have begun to notice the hybrid trend.

Google App Engine

Google has been working with the engineers at Red Hat to produce the Technology Compatibility Kit (TCK) for Google App Engine. Essentially this kit provides the framework for verifying the API-level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn’t appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilises its own administrative console.
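To make the idea of API-level compatibility concrete, here’s a minimal sketch of the kind of code this enables. The servlet below is my own illustration, not something from the TCK; it only touches the public App Engine datastore API, and the promise of CapeDwarf is that a WAR like this should, in theory, deploy against its implementation of that same API without modification:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

// A deliberately tiny App Engine servlet (my own example, not part of the
// TCK). It stores one entity via the low-level datastore API; because it
// codes only against the public App Engine API, an implementation like
// CapeDwarf can supply its own backend behind that same interface.
public class GreetingServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("content", "Hello from a portable App Engine app");
        datastore.put(greeting); // put() completes the auto-generated key
        resp.setContentType("text/plain");
        resp.getWriter().println("Stored greeting " + greeting.getKey());
    }
}
```

Whether every corner of the platform behaves identically is exactly what the TCK exists to verify, but the portability story rests on simple code like this never knowing which implementation sits underneath it.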

They’ve done the right thing by partnering with Red Hat, as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has, though, so it might be less about using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers, but I’d hazard a guess they’re similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product, but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.

Whilst my preferred cloud is still Azure I’m a firm believer that the more options we have to realise the hybrid dream the better. We’re still a long way from having truly portable applications that can move freely between private and public platforms, but the roots are starting to take hold. Given the rapid pace of IT innovation I’m confident that the next couple of years will see the hybrid dream fully realised, and then I’ll finally be able to stop pining for it.

¹This article suggests that Microsoft has 20% of the market which, since Microsoft has raked in $1 billion, would peg the total market at some $5 billion, a figure way out of line with what Gartner says. If you know of some solid cloud platform market share figures I’d love to see them, as apart from AWS being number 1 I can’t find much else.

You Make Your Own Education (or Can You?).

Over the weekend the wife and I watched a documentary on the American education system called Waiting for Superman.

The documentary dives deep into the American public education system, and the crux of it is that whilst there are some fantastic public schools there, space at those schools is limited. In order to resolve this situation the government has legislated the only thing that can be equally fair to all involved: public schools with more applicants than places must hold a lottery to determine who gets in and who doesn’t. It’s eye opening, informative and heart wrenching all at the same time, and definitely something that I’d recommend you watch.

The reason it hit home for me was the parallels I could draw to my own education experience. My parents had had me on the waiting list for one of Canberra’s most respected private schools since the day I was born. I went to a public school for my initial education, but I was always destined for a life of private education. However upon attending that school I was miserable: the few friends who made the transition with me drifted away, and the heavily Anglican environment (with mandatory bible study classes) only made things worse.

The straw that broke my parents’ backs was when I made my case for transferring to a public school where most of my friends had ended up. They couldn’t get through to me that the private school I was attending was the best place for me to be educated, but one thing I said changed their minds: “You make your own education”. I still wonder if I actually uttered those exact words or just something along those lines (I don’t have a vivid memory of the incident, but my parents say it was so), but that was enough for them to let me transfer. If I’m honest the transfer didn’t make things any better, although I told myself differently at the time, but suffice to say I can count myself amongst the few who did make it to university after going to that school. Heck, you might even say I’ve been successful.

Anecdotally, then, the public education system in Australia seems to work just fine. The schools I went to had a rather rough reputation for not producing results (and indeed my university entrance score was dragged down a good 5 points due to my attendance there), but there were students who excelled in spite of it. However when watching Waiting for Superman I got this sinking feeling that students in the USA might not even have the chance to make their own education, simply because the schools are set up for failure. Indeed my own success might have blinded me to the possibility that the schools I went to were set up the same way, leading me to believe there was no problem when there was one.

Cursory research, however, shows that, at least for Australia, this isn’t the case. Indeed the biggest indicators of a child’s success at school and their pursuit of higher education are largely non-school factors. Following on from that idea, it’s not just you who makes your education, but the entire social structure that supports it. Bringing that back to my experience, it was my strong family support that led me to do well and my late-found group of friends who led me to excel at university. In that respect I should feel incredibly lucky, but in reality it’s got little to do with luck and more to do with a whole lot of dedicated effort on the part of everyone who was involved in my life during my education.

Still, we should be thankful for the education system that Australia has, especially when you compare it to what it could be. I’m still a strong believer in those words I uttered well over a decade ago, and whilst they might not be applicable everywhere in the world they are definitely applicable here.

Google+ API is Here, But is it Enough?

Google+ has only been around for a mere 2 months yet I already feel like writing about it is old hat. In the short time that the social networking service has been around it’s had a positive debut to the early adopter market, seen wild user growth and even had to tackle some hard issues like its user name policy and user engagement. I said very early on that Google had a major battle on their hands when they decided to launch another volley at another Silicon Valley giant, but early indicators were pointing towards them at least being a highly successful niche product, if only for the fact that they were simply “Facebook that wasn’t Facebook”.

One of the things that was always lacking from the service was an API on the same level as its competitors’. Facebook and Twitter both have exceptional APIs that allow services to deeply integrate with them and, at least in the case of Twitter, are responsible in large part for their success. Google was adamant that an API was on the way and just under a week ago they delivered on their promise, releasing an API for Google+:

Developers have been waiting since late June for Google to release their API to the public. Well, today is that day. Just a few minutes ago Chris Chabot, from Google+ Developer Relations, announced that the Google+ API is now available to the public. The potential for this is huge, and will likely set Google+ on a more direct path towards social networking greatness. We should see an explosion of new applications and websites emerge in the Google+ community as developers innovate and make useful tools from the available API. The Google+ API at present provides read-only access to public data posted on Google+, and most of the API follows a RESTful design, which means that you must use standard HTTP techniques to get and manipulate resources.

Like all their APIs the Google+ one is very well documented, and even the majority of their client libraries have been updated to include it. Looking over the documentation it appears that there are really only two kinds of data available to developers at this point in time: public profiles (People) and activities that are public. Supporting these APIs is the OAuth framework, which lets users authorise external applications to access their Google+ data. In essence this is a read-only API for things that were already publicly accessible, which really only serves to eliminate the need to screen scrape the same data.
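As a quick illustration of just how bare-bones that surface is, here’s a sketch of pulling a user’s public activity stream. The endpoint follows the plus/v1 pattern as I read the documentation, and `USER_ID` and `YOUR_API_KEY` are placeholders, not real values:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// A rough sketch of calling the read-only Google+ API: a plain HTTP GET
// with an API key, JSON back. USER_ID and YOUR_API_KEY are placeholders.
public class PlusPublicActivities {
    public static void main(String[] args) throws Exception {
        String userId = "USER_ID";
        String apiKey = "YOUR_API_KEY";
        URL url = new URL("https://www.googleapis.com/plus/v1/people/"
                + userId + "/activities/public?key=" + apiKey);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line).append('\n');
        }
        in.close();
        System.out.println(json); // the raw JSON feed of public posts
    }
}
```

That’s about the extent of it: anything beyond reading data that was already public, like writing to a stream, simply has no endpoint yet.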

I’ll be honest, I’m disappointed in this API. Whilst there are some useful things you can do with this data (like syndicating Google+ posts to other services and reader clients), the things that I believe Google+ would be great at doing aren’t possible until applications can be given write access to my stream. Now this might just be my particular use case, since I usually use Twitter for my brief broadcasts (which are auto-syndicated to Facebook) and this blog for longer prose (which is auto-shared to Twitter), so my preferred method of integration would be to have Twitter post to my Google+ feed. As it is right now my Google+ account is a ghost town compared to my other social networks, simply because of the lack of automated syndication.

Of course I understand that this isn’t the final API, but even as a first attempt it feels a little weak.

Whilst I won’t go as far as to say that Google+ is dying, there is data to suggest that the early adopter buzz is starting to wind down. Anecdotally my feed seems to mirror this trend, with the average time between posts there being days rather than the minutes it is on my other social networks. The API could be the catalyst required to bring that activity back up to those initial levels, but I don’t think it’s capable of doing so in its current form. I’m sure that Google won’t be a slouch when it comes to releasing new APIs, but they’re going to have to be quick about it if they want to stem the tide of inactivity.

I really want to use Google+, I really do; it’s just that the lack of interoperability keeps all my data out of it. I’m sure in the next couple of months we’ll see the release of a more complete API that will enable me to use the service as I, and many others I feel, use our other social networking services.

VMware vSphere 5: Technologically Awesome, Financially Painful.

I make no secret of the fact that I’ve pretty much built my career around a single line of products, specifically those from VMware. Initially I simply used their workstation line of products to help me through university projects that required Linux to complete, but after one of my bosses caught wind of my “experience” with VMware’s products I was put on the fast track to becoming an expert in their technology. The timing couldn’t have been more perfect, as virtualization then became a staple of every IT department I’ve had the pleasure of working with, and my experience with VMware ensured that my resume always floated around near the top when it came time to find a new position.

In this time I’ve had a fair bit of experience with their flagship product, now called vSphere. In essence it’s an operating system you install on a server that lets you run multiple, distinct operating system instances on top of it. Since IT departments always bought servers with more capacity than they needed, systems like vSphere meant they could use that excess capacity to run other, less power hungry systems alongside them. It really was a game changer, and from then on servers were usually bought with virtualization as the key purpose in mind rather than for a specific system. VMware is still the leader in this sector, holding an estimated 80% of the market, and has arguably the most feature rich product suite available.

Yesterday saw the announcement of their latest product offering, vSphere 5. From a technological standpoint it’s very interesting, with many innovations that will put VMware even further ahead of their competition, at least technologically. Amongst the usual fanfare of bigger and better virtual machines and improvements to their current technologies, vSphere 5 brings with it a whole bunch of new features aimed squarely at making vSphere the cloud platform of the future. Primarily these innovations are centred around automating certain tasks within the data centre, such as provisioning new servers and managing server load right down to the disk level, something that wasn’t possible previously. Considering that I believe the future of cloud computing (at least for government organisations and large scale in-house IT departments) is a hybrid public/private model, these improvements are a welcome change, even if I won’t be using them immediately.

The one place that VMware falls down, and is (rightly) heavily criticized for, is price. With the most basic licenses costing around $1,000 per processor it’s not a cheap solution by any stretch of the imagination, especially if you want to take advantage of any of the advanced features. Still, since the licensing was per processor it meant that you could buy a dual processor server (each processor with, say, 6 cores) with oodles of RAM and still come out ahead of other virtualization solutions. However with vSphere 5 they’ve changed the way they do pricing significantly, to the point of destroying such a strategy (and those potential savings) along with it.

Licensing is still charged on a per-processor basis, but instead of an upper limit on the amount of physical memory (256GB for most licenses; Enterprise Plus gives you unlimited) you are now given a vRAM allocation per licence purchased. Depending on your licensing level you’ll get 24GB, 32GB or 48GB worth of vRAM which you’re allowed to allocate to virtual machines. For typical smaller servers this won’t pose much of a problem, as a dual proc, 48GB RAM server (which is very typical) would be covered easily by the cheapest licensing. However should you exceed even 96GB of RAM, which is very easy to do, that same server will then require additional licenses in order to fully utilize the hardware. For smaller environments this has the potential to make VMware’s virtualization solution untenable, especially when you put it beside the almost free competitor of Hyper-V from Microsoft.
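To make that arithmetic concrete, here’s my own back-of-the-envelope sketch (not VMware’s official calculator) of how many licenses a host would need under the new model, assuming one license per physical processor plus enough licenses that the pooled vRAM entitlement covers what you actually allocate:

```java
// Back-of-the-envelope vRAM licensing maths (my own sketch, not an
// official VMware tool). A host needs at least one license per physical
// processor, and enough licenses that the combined vRAM entitlement
// covers the memory allocated to its virtual machines.
public class VramLicenses {
    static int licensesNeeded(int processors, int allocatedVramGb,
                              int entitlementGbPerLicense) {
        // Ceiling division: licenses required to cover the allocated vRAM.
        int forVram = (allocatedVramGb + entitlementGbPerLicense - 1)
                / entitlementGbPerLicense;
        return Math.max(processors, forVram);
    }

    public static void main(String[] args) {
        // Dual-proc, 48GB host on the cheapest (24GB entitlement) tier:
        // still just the two per-processor licenses you'd buy anyway.
        System.out.println(licensesNeeded(2, 48, 24));  // 2
        // The same two-processor box stuffed with 192GB of allocated vRAM:
        // suddenly eight licenses.
        System.out.println(licensesNeeded(2, 192, 24)); // 8
    }
}
```

The per-processor floor never goes away; the vRAM ceiling simply stacks on top of it, which is exactly why RAM heavy, processor poor servers come off worst.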

The VMware user community has, of course, not reacted positively to this announcement. Whilst for many larger environments the problems won’t be so bad, as the vRAM allocation is pooled at the data centre level rather than the server level (allowing servers with spare entitlements to help out their beefier brethren), it does have the potential to hurt smaller environments, especially those who heavily invested in RAM heavy, processor poor servers. It’s also compounded by the fact that you’ll only have a short time to choose between upgrading for free, thus risking having to buy more licenses, or abstaining and later having to pay an upgrade fee. It’s enough for some to start looking into moving to the competition, which could cut into VMware’s market share drastically.

The reasoning behind these changes is simple: such pricing is much more favourable to a ubiquitous cloud environment than it is to the current industry norm for VMware deployments. VMware might be slightly ahead of the curve on this one, however, as most customers are not ready to deploy their own internal clouds, with the vast majority of current cloud users relying on hosted solutions. Additionally many common enterprise applications aren’t compatible with VMware’s cloud, locking end users out of realising the benefits of a private cloud. VMware might be choosing to bite the bullet now rather than later in the hope it will spur movement onto their cloud platform at a later stage. Whether this strategy works remains to be seen, but current industry trends are pushing very hard towards a cloud based future.

I’m definitely looking forward to working with vSphere 5, and there are several features that will provide an immense amount of value to my current environment. The licensing change, whilst I feel it won’t affect me much, is cause for concern, and whilst I don’t believe VMware will budge on it any time soon I do know that the VMware community is an innovative lot and it won’t be long before they work out how to make the best of this situation. Still, it’s definitely an in for the competition, and whilst they might not have the technological edge they’re more than suitable for many environments.