
Configuring Dell Server BIOS Remotely Using WSMAN and WinRM.

I have the pleasure of configuring some Dell kit without the use of a pre-execution environment. This presents quite a challenge as many of the management tools are designed to run within such an environment or an installed operating system, which means my options for configuring these servers are somewhat limited. Thankfully, for most of the critical stuff Dell’s RACADM tool is more than capable of managing the server remotely; unfortunately, however, it doesn’t have any access to the system BIOS, where some critical changes need to be made. I was in need of a solution to this problem and my saviour came in the form of a protocol called Web Services Management (WSMAN).

WSMAN is an open protocol for server management which provides a rather feature-rich interface for getting, setting and enumerating the various features and settings on your hardware. Of course, since it’s so powerful it’s also rather complex in nature and you won’t really be able to stumble your way through it without the help of a vendor-specific guide. For Dell servers the appropriate guide is the Lifecycle Controller Web Services Interface Guide (there’s an equivalent available for Linux), which gives you a breakdown of the commands that are available and what they can accomplish.
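Before you start poking at BIOS settings it’s worth confirming you can actually talk to the iDRAC’s WSMAN endpoint at all. A simple identify call does the trick; this is just a quick sanity check using the same default root/calvin credentials and certificate-skipping flags as the examples below, so adjust to suit your environment:

winrm id
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic

If that comes back with the protocol version and vendor details then the rest of the commands here should work against that iDRAC.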

The commands aren’t all fully documented however, so I thought I’d show you a couple I’ve used in order to configure some BIOS settings on one of the M910 blades I’m currently working on. The first requirement was to disable all the on-board NICs as we want to use the QLogic QME8262-k 10GbE NICs instead. In order to do this however we first need to get some information out of the WSMAN interface so we know which variables to change. The first command you’ll want to run is the following:

winrm e http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSEnumeration
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic

This will give you a whole bunch of output along these lines:

DCIM_BIOSEnumeration
    AttributeName = EmbNic1Nic2
    Caption
    CurrentValue = Enabled
    DefaultValue
    Description
    ElementName
    FQDD = BIOS.Setup.1-1
    InstanceID = BIOS.Setup.1-1:EmbNic1Nic2
    IsOrderedList
    IsReadOnly = FALSE
    PendingValue
    PossibleValues = Disabled, Enabled

DCIM_BIOSEnumeration
    AttributeName = EmbNic1
    Caption
    CurrentValue = EnabledPxe
    DefaultValue
    Description
    ElementName
    FQDD = BIOS.Setup.1-1
    InstanceID = BIOS.Setup.1-1:EmbNic1
    IsOrderedList
    IsReadOnly = FALSE
    PendingValue
    PossibleValues = Disabled, EnablediScsi, EnabledPxe, Enabled

Of note in the output are the AttributeName and PossibleValues fields. In essence these tell you which BIOS variables exist and which states they can be set to, and all of them can be modified through the appropriate WSMAN command. The Dell guide I referenced earlier doesn’t exactly tell you how to do this though, and the only example that appears to be close is one for modifying the BIOS boot mode setting. As it turns out, however, that same command can be used to modify any variable output by the previous command so long as you create the appropriate XML file. Shown below is the command and XML file to disable a pair of the embedded NICs (here EmbNic3Nic4; the same file with EmbNic1Nic2 takes care of the other pair):

Code:
winrm i SetAttribute http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService?SystemCreationClassName=DCIM_ComputerSystem
+CreationClassName=DCIM_BIOSService
+SystemName=DCIM:ComputerSystem+Name=DCIM:BIOSService
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic -file:SetAttribute_BIOS.xml

SetAttribute_BIOS.xml:
<p:SetAttribute_INPUT xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService">
<p:Target>BIOS.Setup.1-1</p:Target>
<p:AttributeName>EmbNic3Nic4</p:AttributeName>
<p:AttributeValue>Disabled</p:AttributeValue>
</p:SetAttribute_INPUT>
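If you want to double check that the change has actually been queued you can pull that single attribute back out rather than enumerating everything again; the PendingValue field should now show Disabled. This is just a sketch along the same lines as the enumeration above, with the InstanceID taken from the earlier output:

winrm g http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSEnumeration?InstanceID=BIOS.Setup.1-1:EmbNic3Nic4
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic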

SetAttribute appears to work quite well for individual attributes, but I’ve encountered errors when trying to set more than one BIOS variable at a time. That could easily be down to me fat-fingering the input file (I didn’t really check it before troubleshooting further), but it could also be a limitation of the WSMAN implementation on the Dell servers. Either way, once you’ve run that command you’ll notice the response from the server states that the values are pending and the server requires a reboot. I’m not 100% sure whether you can get away with just rebooting it through the iDRAC or physically rebooting it, but there is a WSMAN command which I can guarantee will apply the setting whilst also rebooting the server for you. Again this one relies on an XML file to succeed:

Code:
winrm i CreateTargetedConfigJob http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService?SystemCreationClassName=DCIM_ComputerSystem
+CreationClassName=DCIM_BIOSService
+SystemName=DCIM:ComputerSystem
+Name=DCIM:BIOSService
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic -file:CreateTargetedConfigJob_BIOS.xml

CreateTargetedConfigJob_BIOS.xml:
<p:CreateTargetedConfigJob_INPUT xmlns:p="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_BIOSService">
<p:Target>BIOS.Setup.1-1</p:Target>
<p:RebootJobType>2</p:RebootJobType>
<p:ScheduledStartTime>TIME_NOW</p:ScheduledStartTime>
<p:UntilTime>20131111111111</p:UntilTime>
</p:CreateTargetedConfigJob_INPUT>
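The response to that invocation includes a job ID (something along the lines of JID_ followed by a string of digits). If you’d rather watch the job than just wait for the reboots, you should be able to keep an eye on it through the DCIM_LifecycleJob class, which lists each job along with its status and messages. A sketch, assuming the same root/dcim resource URI style as the other classes used in this post:

winrm e http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/root/dcim/DCIM_LifecycleJob
-u:root -p:calvin
-r:https://[iDRACIP]/wsman -SkipCNcheck -SkipCAcheck
-encoding:utf-8 -a:basic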

Upon executing this command the server will reboot and then load into the Lifecycle Controller, where it will apply the desired settings. After that it will reboot again and you’ll be able to view the settings inside the BIOS proper. It appears that this command can be used for any variable that appears in the initial BIOS enumeration, so it is quite possible to fully configure the BIOS remotely this way. You can also access quite a lot of things within the iDRAC itself, however I’ve found that RACADM is a much easier way to go about that, especially if you simply dump the entire config, edit it, then reupload it. Still, the option is there if you want to use a single tool, but unless you’re something of a masochist I wouldn’t recommend doing everything through WSMAN.
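For reference, that RACADM dump-edit-reupload workflow only takes a couple of commands. A quick sketch using remote RACADM with the same default credentials as above (idrac.cfg is just whatever file name you fancy):

racadm -r [iDRACIP] -u root -p calvin getconfig -f idrac.cfg
racadm -r [iDRACIP] -u root -p calvin config -f idrac.cfg

The first command dumps the config to the file; make your edits in a text editor of choice and the second command pushes them back.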

All that being said, the WSMAN API appears to cover pretty much everything in the server, so if you need to do something to it remotely (hardware-wise) and you don’t have the luxury of a PXE environment or an installed operating system then it’s definitely something to look into. Hopefully the above commands will get you started and the rest of the Dell integration guide will make a little more sense. If you’ve got any questions about a particular command hit me up in the comments, on Twitter or on my Facebook fan page and I’ll help you out as much as I can.

Fusion-IO’s ioDrive Comparison: Sizing up Enterprise Level SSDs.

Of all the PC upgrades I’ve ever done in the past, the one that most notably improved the performance of my rig is, by a wide margin, installing an SSD. Whilst good old fashioned spinning-rust disks have come a long way in recent years in terms of performance, they’re still far and away the slowest component in any modern system. The disk is what chokes most PCs’ performance, acting as a huge bottleneck and slowing everything down to its pace. The problem can be mitigated somewhat by using several disks in a RAID 0 or RAID 10 set, but all of those pale in comparison to even a single SSD.

The problem doesn’t go away in the server environment either; in fact most of the server performance problems I’ve diagnosed have had their roots in poor disk performance. Over the years I’ve discovered quite a few tricks to get around the problems presented by traditional disk drives, but there are just some limitations you can’t overcome. Recently at work the issue of disk performance came to a head again as we investigated the possibility of using blade servers in our environment. I casually made mention of a company that I had heard of a while back, Fusion-IO, who specialise in making enterprise-class SSDs. The possibility of using one of the Fusion-IO cards as a massive cache for the slower SAN disk was a tantalizing prospect, and to my surprise I was able to snag an evaluation unit in order to put it through its paces.

The card we were sent was one of the 640GB ioDrives. It’s surprisingly heavy for its size, sporting gobs of NAND flash and a massive heat sink that hides the proprietary controller. What intrigued me about the card initially was that the NAND didn’t sport any branding I recognised (usually it’s something recognisable like Samsung), but as it turns out each chip is a 128GB Micron NAND flash chip. If all that storage was presented raw it would total some 3.1TB, and this is telling of the underlying architecture of the Fusion-IO devices.

The total storage available to the operating system once this card is installed is around 640GB (600GB usable). To get that much capacity out of the Micron NAND chips you’d only need 5 of them, but the ioDrive comes with a grand total of 25 dotting the board, a ratio that no traditional RAID scheme accounts for. So, with 25 chips on board and only 5 chips’ worth of capacity available, it follows that the Fusion-IO card uses the chips in quintuplet sets to provide the high level of performance they claim. That’s an incredible amount of parallelism, and if I’m honest I expected these chips to all be 256MB parts simply RAIDed together to make one big drive.

Funnily enough I did actually find some Samsung chips on this card, two 1GB DDR2 chips. These are most likely used for the CPU on the ioDrive which has a front side bus of either 333 or 400MHz based on the RAM speed.

But enough of the techno geekery; what’s really important is how well this thing performs in comparison to traditional disks, and whether or not it’s worth the $16,000 price tag that comes along with it. I had done some extensive testing of various systems in the past in order to ascertain whether the new Dell servers we were looking at were going to perform as well as their HP counterparts. All of this testing was purely disk based using IOMeter, a disk load simulator that tests and reports on nearly every statistic you’d want to know about your disk subsystem. If you’re interested in replicating the results I’ve uploaded a copy of my configuration file here. The servers included in the test are the Dell M610x, Dell M710HD, Dell M910, Dell R710 and a HP DL380 G7. For all the tests (bar the two labelled local install) the servers are running a base install of ESXi 5 with a Windows 2008 R2 virtual machine installed on top of it. The specs of the virtual machine are 4 vCPUs, 4GB RAM and a 40GB disk.

As you can see the ioDrive really is in a class of its own. The only server that comes close in terms of IOPS is the M910, and that’s because it’s sporting 2 Samsung SSDs in RAID 0. What impresses me most about the ioDrive though is its random performance, which manages to stay quite high even as the block size starts to get bigger. Although it’s not shown in these tests, the one area where the traditional disks actually equal the Fusion-IO is throughput at really large write sizes, on the order of 1MB or so. I put this down to the fact that the servers in question, the R710s and DL380 G7s, have 8 disks in them that can pump out some serious bandwidth when they need to. With 2 Fusion-IO cards though I’m sure I could easily double that performance figure.

What interested me next was to see how close I could get to the spec sheet performance. The numbers I just showed you are already pretty incredible, but Fusion-IO claims that this particular drive is capable of something on the order of 140,000 IOPS if I played my cards correctly. Using the local install of Windows 2008 I had on there I fired up IOMeter again and set up some 512B tests to see if I could get close to those numbers. The results, as reported in the Dell IO controller software, are shown below:

Ignoring the small blip in the centre where I had to restart the test, you can see that whilst the ioDrive is capable of some pretty incredible IO, the advertised maximums are more theoretical than practical. I tried several different tests and while a few averaged higher than this (approximately 80K IOPS was my best) it was still a far cry from the figures they quote. Had they gotten within 10~20% I would’ve given it to them, but whilst the ioDrive’s performance is incredible it’s not quite as incredible as the marketing department would have you believe.

As a piece of hardware the Fusion-IO ioDrive really is the next step up in terms of performance. The virtual machines I had running directly on the card were considerably faster than their spinning-rust counterparts, and if you were in need of some really crazy performance you couldn’t go past one of these cards. For the purpose we had in mind however (putting it inside an M610x blade) I can’t really recommend it, as that’s a full-height blade with only the power of a half-height one. The M910 represents much better value with its crazy CPU and RAM count, and its SSDs, whilst far from Fusion-IO level, do a pretty good job of bridging the disk performance gap. I didn’t have enough time to see how it would improve some real world applications (it takes me longer than 10 days to get something like this into our production environment), but based on these figures I have no doubt it would considerably improve the performance of whatever I put it into.