Posts Tagged ‘dell’

Configuring Dell Server BIOS Remotely Using WSMAN and WinRM.

I have the pleasure of configuring some Dell kit without the use of a pre-execution environment. This presents quite a challenge as many of the management tools are designed to run within such an environment, or within an installed operating system, which means my options for configuring these servers are somewhat limited. Thankfully for most of the critical stuff Dell’s RACADM tool is more than capable of managing the server remotely; unfortunately it doesn’t have any access to the system BIOS, where some critical changes need to be made. I was therefore in need of a solution to this problem, and it seems my saviour comes in the form of a protocol called Web Services Management (WSMAN).

WSMAN is an open protocol for server management which provides a rather feature-rich interface for getting, setting and enumerating the various features and settings on your hardware. Of course, since it’s so powerful it’s also rather complex in nature, and you won’t really be able to stumble your way through it without the help of a vendor-specific guide. For Dell servers the appropriate guide is the Lifecycle Controller Web Services Interface Guide (there’s an equivalent available for Linux) which gives you a breakdown of the commands that are available and what they can accomplish.

They’re not fully documented however, so I thought I’d show you a couple of commands I’ve used to configure some BIOS settings on one of the M910 blades I’m currently working on. The first requirement was to disable all the onboard NICs, as we want to use the QLogic QME8262-k 10GbE NICs instead. To do this we first need to pull some information out of the WSMAN interface so we know which variables to change. The first command you’ll want to run is the following:

winrm e http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSEnumeration
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic

This will give you a whole bunch of output along these lines:

DCIM_BIOSEnumeration
    AttributeName = EmbNic1Nic2
    Caption
    CurrentValue = Enabled
    DefaultValue
    Description
    ElementName
    FQDD = BIOS.Setup.1-1
    InstanceID = BIOS.Setup.1-1:EmbNic1Nic2
    IsOrderedList
    IsReadOnly = FALSE
    PendingValue
    PossibleValues = Disabled, Enabled

DCIM_BIOSEnumeration
    AttributeName = EmbNic1
    Caption
    CurrentValue = EnabledPxe
    DefaultValue
    Description
    ElementName
    FQDD = BIOS.Setup.1-1
    InstanceID = BIOS.Setup.1-1:EmbNic1
    IsOrderedList
    IsReadOnly = FALSE
    PendingValue
    PossibleValues = Disabled, EnablediScsi, EnabledPxe, Enabled
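
As an aside, if you already know which setting you’re after you don’t have to trawl through the full enumeration every time. Standard WinRM selector syntax should let you pull back a single instance by its InstanceID, which also comes in handy later for checking an attribute’s PendingValue after you’ve changed it. I haven’t used this form anywhere near as much as the enumeration above, so treat it as a sketch:

winrm g http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSEnumeration?InstanceID=BIOS.Setup.1-1:EmbNic1Nic2
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic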

Of note in the enumeration output are the AttributeName, CurrentValue and PossibleValues fields. In essence these identify each BIOS variable along with its current and possible states, and any of them that isn’t flagged as read only can be modified through the appropriate WSMAN command. The Dell guide I referenced earlier doesn’t exactly tell you how to do this though, and the only example that appears to be close is one for modifying the BIOS boot mode setting. As it turns out, that same command can be used to modify any variable output by the enumeration so long as you create the appropriate XML file. Shown below is the command and XML file to disable embedded NICs 3 and 4 (the same approach works for EmbNic1Nic2 and the rest of the attributes):

Code:
winrm i SetAttribute http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService?SystemCreationClassName=DCIM_ComputerSystem
+CreationClassName=DCIM_BIOSService
+SystemName=DCIM:ComputerSystem+Name=DCIM:BIOSService
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic -file:SetAttribute_BIOS.xml

SetAttribute_BIOS.xml:
<p:SetAttribute_INPUT xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService">
<p:Target>BIOS.Setup.1-1</p:Target>
<p:AttributeName>EmbNic3Nic4</p:AttributeName>
<p:AttributeValue>Disabled</p:AttributeValue>
</p:SetAttribute_INPUT>
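
The interface guide also documents a SetAttributes method (note the plural) on the same DCIM_BIOSService class which takes arrays of attribute names and values, so in theory several changes can be queued up in a single call. I haven’t verified this on the M910s myself, but going off the documented input format the file should look something along these lines, reusing the NIC attributes from above; the invocation is the same winrm i command with SetAttribute swapped for SetAttributes and this file passed to -file:.

SetAttributes_BIOS.xml:
<p:SetAttributes_INPUT xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService">
<p:Target>BIOS.Setup.1-1</p:Target>
<p:AttributeName>EmbNic1Nic2</p:AttributeName>
<p:AttributeName>EmbNic3Nic4</p:AttributeName>
<p:AttributeValue>Disabled</p:AttributeValue>
<p:AttributeValue>Disabled</p:AttributeValue>
</p:SetAttributes_INPUT>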

The single-attribute approach works quite well, but I’ve encountered errors whenever I’ve tried to set more than one BIOS variable at a time. This could easily be due to me fat-fingering the input file (I didn’t really check it before troubleshooting further) but it could also be a limitation of the WSMAN implementation on the Dell servers. Either way, once you’ve run that command you’ll notice the response from the server states that the values are pending and the server requires a reboot. I’m not 100% sure whether you can get away with just rebooting it through the iDRAC or physically rebooting it, but there is a WSMAN command which I can guarantee will apply the settings whilst also rebooting the server for you. Again this one relies on an XML file to succeed:

Code:
winrm i CreateTargetedConfigJob http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService?SystemCreationClassName=DCIM_ComputerSystem
+CreationClassName=DCIM_BIOSService
+SystemName=DCIM:ComputerSystem
+Name=DCIM:BIOSService
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic -file:CreateTargetedConfigJob_BIOS.xml

CreateTargetedConfigJob_BIOS.xml:
<p:CreateTargetedConfigJob_INPUT xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService">
<p:Target>BIOS.Setup.1-1</p:Target>
<p:RebootJobType>2</p:RebootJobType>
<p:ScheduledStartTime>TIME_NOW</p:ScheduledStartTime>
<p:UntilTime>20131111111111</p:UntilTime>
</p:CreateTargetedConfigJob_INPUT>

Upon executing this command the server will reboot and load into the Lifecycle Controller, where it will apply the desired settings. It will then reboot again and you’ll be able to view the settings inside the BIOS proper. It appears that this command can be used for any variable that shows up in the initial BIOS enumeration, so it’s quite possible to fully configure the BIOS remotely this way. You can also access quite a lot of things within the iDRAC itself, however I’ve found that RACADM is a much easier way to go about that, especially if you simply dump the entire config, edit it, then re-upload it. Still, the option is there if you want to stick to a single tool, but unless you’re something of a masochist I wouldn’t recommend doing everything through WSMAN.
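
One other thing worth knowing: CreateTargetedConfigJob hands you back a job ID, and the Lifecycle Controller exposes its jobs through a DCIM_LifecycleJob class, so you should be able to poll for the job’s status rather than sitting on the console waiting for the reboots to finish. I haven’t built this into anything scripted yet, and depending on your iDRAC firmware the resource URI may need to be in the schemas.dell.com form instead, but an enumeration along the following lines should list the jobs along with fields like JobStatus and Message:

winrm e http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_LifecycleJob
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic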

All that being said, the WSMAN API appears to cover pretty much everything in the server, so if you need to do something to it remotely (hardware-wise) and you don’t have the luxury of a PXE environment or an installed operating system then it’s definitely something to look into. Hopefully the above commands will get you started and then the rest of the Dell integration guide will make a little more sense. If you’ve got any questions about a particular command hit me up in the comments, on Twitter or on my Facebook fan page and I’ll help you out as much as I can.

The Cloud Wars Are About to Begin.

With virtualization now as pervasive an idea in the datacentre as storage area networks or underfloor cooling, the way has been paved for the cloud to follow it there for quite some time now. There are now many commercial off-the-shelf solutions that allow you to incrementally implement the multiple levels of the cloud (IaaS -> PaaS -> SaaS) without the need for a large operational expenditure on developing the software stack at each level. The differentiation now comes from things like added services, geographical location and pricing, although that last one is already turning into a race to the bottom.

The big iron vendors (Dell, HP, IBM) have noticed this, and whilst they could still sustain their current business quite well by providing the required tin to the cloud providers (the compute power is shifted, not necessarily reduced) they’re all starting to look at creating their own cloud solutions so they can continue to grow their business. I covered HP’s cloud solution last week after the HP Cloud Tech Day, but recently there’s been a lot of news coming out regarding the other big players, both from the old big iron world and the more recently established cloud providers.

The first cab off the rank I came across was Dell, who are apparently gearing up to make a cloud play. Now if I’m honest that article, whilst it does contain a whole lot of factual information, felt a little speculative to me, mostly because Dell hasn’t tried to sell me on the cloud idea when I’ve been talking to them recently. Still, after doing a small bit of research I found that not only are Dell planning to build a global network of datacentres (where global usually means everywhere but Australia), they announced plans to build one in Australia just on a year ago. Combine this with their recent acquisition spree, which included companies like Wyse, and it seems highly likely that this will be the backbone of their cloud offering. What that offering will be is still up for speculation, but it wouldn’t surprise me if it was yet another OpenStack solution.

Mostly because Rackspace, probably the second biggest general cloud provider behind Amazon Web Services, just announced that their cloud will be compatible with the OpenStack API. This comes hot on the heels of another announcement that both IBM and Red Hat would become contributors to the OpenStack initiative, although there’s no word yet on whether they have a view to implementing the technology in the future. Considering that both HP and Dell are already showing their hands with their upcoming cloud strategies, it would seem that becoming OpenStack contributors is the first step towards some form of IBM cloud. They’d be silly not to, given their share of the current server market.

Taking all of this into consideration it seems we’re approaching a point of convergence in the cloud computing industry. I wrote early last year that one of the biggest drawbacks to the cloud was its proprietary nature, and it seems the big iron providers noticed that this was a concern. The reduction of vendor lock-in significantly lowers the barriers to entry for many customers and provides a whole host of other benefits, like being able to take advantage of disparate cloud providers for service redundancy. As I said earlier, the differentiation between providers will then predominantly come from value-add services, much like it did for virtualization in the past.

This is the beginning of the cloud war, where all the big players throw their hats into the ring and duke it out for our business. It’s a great thing for both businesses and consumers, as the quality of products will increase rapidly and prices will continue on a downhill trend. It’s quite an exciting time, one akin to the virtualization revolution that started almost a decade ago. As always I’ll be following these developments keenly, as the next couple of years will be something of a proving ground for all cloud providers.

Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I’ve done in the past, the one that has most notably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old-fashioned spinning rust disks have come a long way in recent years in terms of performance, they’re still far and away the slowest component in any modern system. This is what chokes most PCs’ performance: the disk is a huge bottleneck, slowing everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set, but all of those pale in comparison to even a single SSD.

The problem doesn’t go away in the server environment either; in fact most of the server performance problems I’ve diagnosed have had their roots in poor disk performance. Over the years I’ve discovered quite a few tricks to get around the problems presented by traditional disk drives, but there are just some limitations you can’t overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company I had heard of a while back, Fusion-IO, who specialise in making enterprise-class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect, and to my surprise I was able to snag an evaluation unit in order to put it through its paces.

The card we were sent was one of the 640GB ioDrives. It’s surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn’t sport any branding I recognised (usually it’s something recognisable like Samsung), but as it turns out each chip is a 128GB Micron NAND flash chip. If all that storage was presented raw it would total some 3.1 TB, and this is telling of the underlying design of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). To get that kind of capacity out of the Micron NAND chips you’d only need 5 of them, yet the ioDrive comes with a grand total of 25 dotting the board, and no traditional RAID scheme accounts for that ratio of raw to presented storage. So, based on the fact that there are 25 chips but only 5 chips’ worth of capacity available, it follows that the Fusion-IO card uses quintuplet sets of chips to provide the high level of performance it claims. That’s an incredible amount of parallelism, and if I’m honest I expected these chips to all be 256MB parts simply put in RAID 1 to make one big drive.

Funnily enough I did actually find some Samsung chips on this card, two 1GB DDR2 chips. These are most likely used for the CPU on the ioDrive which has a front side bus of either 333 or 400MHz based on the RAM speed.

But enough of the techno geekery; what’s really important is how well this thing performs in comparison to traditional disks, and whether or not it’s worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past in order to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you’d want to know about your disk subsystem. If you’re interested in replicating the results I’ve got then I’ve uploaded a copy of my configuration file here. The servers included in the test are the Dell M610x, Dell M710HD, Dell M910, Dell R710 and an HP DL380G7. For all the tests (bar the two labelled local install) the platform was a base install of ESXi 5 with a Windows 2008 R2 virtual machine installed on top of it. The specs of the virtual machine are 4 vCPUs, 4GB RAM and a 40GB disk.

As you can see the ioDrive really is in a class all of its own. The only server that comes close in terms of IOPS is the M910, and that’s because it’s sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive though is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it’s not shown in these tests, the one area where the traditional disks actually equal the Fusion-IO is throughput once you get up to really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. If I had 2 Fusion-IO cards though I’m sure I could easily double that performance figure.

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are already pretty incredible, but Fusion-IO claims that this particular drive is capable of something on the order of 140,000 IOPS if I played my cards correctly. Using the local install of Windows 2008 I had on there I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as captured by the Dell IO controller software, are shown below:

Ignoring the small blip in the centre where I had to restart the test, you can see that whilst the ioDrive is capable of some pretty incredible IO, the advertised maximums are more theoretical than practical. I tried several different tests and while a few averaged higher than this (approximately 80K IOPS was my best) it was still a far cry from the figures they quote. Had they gotten within 10~20% I would’ve given it to them, but whilst the ioDrive’s performance is incredible it’s not quite as incredible as the marketing department would have you believe.

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning rust counterparts, and if you were in need of some really crazy performance you couldn’t go past one of these cards. For the purpose we had in mind for it however (putting it inside an M610x blade) I can’t really recommend it, as that’s a full-height blade with only the power of a half-height. The M910 represents much better value with its crazy CPU and RAM count, and its SSDs, whilst far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn’t have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment) but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.

Google Swings Its Proverbial, China Hits Back.

Ever since my last post on the whole Google vs China situation I’ve steered clear of jumping into the fray again. That’s not for lack of material though, especially when Google took the impressive step of shutting down its China servers and redirecting all google.cn traffic to their Hong Kong servers (which are politically and legally isolated from mainland China), putting the ball square back in China’s court. I knew it wouldn’t be long before the Chinese government retaliated, and I expected they would do much the same as they have done to other services that don’t follow their rules, i.e. block them outright, especially when they’d accused Google of being spies.

It seems however that the situation is a little bit more complicated than that:

The Chinese government has attempted to restrict access to the Hong Kong–based servers where Google is offering uncensored search results to mainland China users.

On Tuesday, according to The New York Times, mainland China users could not see uncensored Hong Kong–based content after the government either disabled certain searches or blocked links to results.

Citing business executives “close to industry officials,” The Times also reports that China Mobile – the country’s largest wireless carrier – is under pressure from the government to cancel a pact with Google that puts the web giant’s search engine on the carrier’s mobile home page. The carrier is expected to end the pact – though it doesn’t have an agreement in place with a new search provider.

The Chinese government isn’t stupid, and they know that blocking Google outright would just fan the flames of the backlash swelling up against them. Instead they’ve curtailed the uncensored search engine as best they can to match how it worked previously, leaving the switch to the Hong Kong servers mostly transparent to the less tech-savvy amongst China’s residents (who really wouldn’t have been bothered by the initial censoring anyway). What does come as a surprise is the government’s reaction towards Google’s other businesses, which seems to be their way of strong-arming Google back into place.

Initially Google signed on to censor its results as it thought that at least having some presence in China was better than none at all. Whilst the shareholders were unanimously for the move (come on, who wouldn’t want their company to make more money?) Google copped a beating from critics who trotted out the corporate motto of “Don’t Be Evil” as a sticking point for bowing to the Chinese government’s will. The criticism was well founded, as many felt capitulating implied some level of support for the government’s activities which, even at the best of times, have been highly questionable to observers. Even more interesting is that the same critics also threw a bit of flak Google’s way for pulling out of China, even though the move provided them with vindication of their initial stance.

Google didn’t make this decision lightly. Ever since their initial scuffle and rebellion against the Chinese government back in January, Google’s shares have taken a whopping 6% hit. From a business perspective they would have to judge this (hopefully) short-term damage to their stock price as less than what continuing business in China would have done to them, which is saying quite a lot. They’re far from shutting down all of their operations within China’s borders, but pulling the plug on their biggest asset there shows that they aren’t keen to play games with the Chinese government anymore, despite the damage it will do to their bottom line.

I made the prediction that should Google pull out of China many companies would begin to follow suit. At the time I was really only focused on Internet based companies, as they’re the ones who struggle the most within China’s borders. As it turns out I was right as the domain name giant GoDaddy is discontinuing its services to the region:

GoDaddy.com Inc., the world’s largest domain name registration company, told lawmakers Wednesday that it will cease registering Web sites in China in response to intrusive new government rules that require applicants to provide extensive personal data, including photographs of themselves.

The rules, the company believes, are an effort by China to increase monitoring and surveillance of Web site content and could put individuals who register their sites with the firm at risk. The company also believes the rules will have a “chilling effect” on new domain name registrations.

GoDaddy’s move follows Google’s announcement Monday that it will no longer censor search results on its site in China.

It’s not only pure Internet companies that are looking for solutions to the China problem; Dell has also begun looking at other, less restrictive regions. So whilst there isn’t a mass exodus of western companies from China, there is mounting evidence that such companies aren’t willing to deal with the government’s regulations in order to do business there. Honestly I wouldn’t have expected such moves from either company, as Dell makes its money on the volumes it moves (provided for the most part by China) and GoDaddy isn’t renowned as a bastion of corporate morals, though they do have the freedom of not being controlled by shareholders. Still, if two large multinational companies are willing to throw their weight behind Google, you’ve got to wonder how long other companies will put up with China’s restrictive market.

Hopefully enough big names will jump on the Google bandwagon and we’ll begin to see China’s government rethink its restrictive stance. I’m not naive though and I know it will take a lot of pressure for the Chinese government to make any concessions for western companies looking to make a name for themselves on China’s shores. However what we’re seeing now are the opening chapters to a book that still has many pages to be written, and has a long time to go until it’s published to the wider world.