Boot From SAN on RedHat with PowerPath and EMC Clarion

Boot From SAN with LVM and Multipath

SEE http://www.thogan.com/site/index.php?option=com_content&view=article&id=5:ubuntu-multipath-boot-from-san-experiment&catid=2:uncatagorized&Itemid=2 for information on our experience with Ubuntu :)

 

Before getting started, make sure that you have the proper installation materials and that the SAN configuration is set up appropriately for a system install.

 

Install Media

 

RHEL 4, Update 6 (RHEL 4.6) or RHEL 5.  Earlier versions of RedHat, including earlier update releases, ship an improperly functioning QLogic driver, so use this specific installation media for this document.  Also, depending on the version of the QLogic driver, the SAN devices may enumerate before or after the local storage; use fdisk and look at the volume sizes to identify the local storage, and remember which device it is.

 

SAN Configuration

 

One path to the SAN.  There cannot be multiple paths to the SAN during the install, as multiple paths cause problems with mounting /boot and finding the LVM partitions.  The system must be booted in order to correct the configuration, so you must perform the install with only one path configured.  Once the system boots, the appropriate adjustments can be made to fstab and the LVM configuration to allow the system to boot properly with multiple paths.

 

Location of SAN Boot Card

 

You must know in which PCI slot the HBA that you will boot from resides, as you will need to configure the BIOS to boot from it.  You must also make sure that this is the card with the active path, and you will need to configure that specific card to have boot enabled.

 

Three things that need to line up:

 

BIOS boot device = HBA w/active path = HBA configured to boot

 

BIOS Configuration

 

This section is written based on an installation on IBM x86 hardware.  If you are using another platform these menus may be different.

 

Setting The Boot Device

 

Boot the system and enter the system BIOS.  You will need to make sure that the SAN card is a valid boot device.

 

Select “Start Options”

Go To “PCI Device Boot Priority”

                Modify this field to reflect the PCI slot number in which the boot HBA resides.

Go To “Startup Sequence Options”

                Under “Primary Startup Sequence”, set the four devices as follows:[1]

                “CD ROM”

                “Hard Disk 0”

                “Hard Disk 1”

                “Network”

Escape back to the main menu.

Select “Save Settings” then “Exit Setup”

 

Configuring the HBA

 

The HBA will now need to be configured to be bootable.  On the next boot, enter the HBA BIOS.  This document was written against QLogic 2460 HBAs.  If you are using a different HBA, the process may vary.

 

Enter the BIOS with a <CTRL-Q> when prompted.

Select the adapter with the active path (also should be the slot configured for boot in the BIOS)

Select “Configuration Settings”

Select “Adapter Settings”

                Set “Host Adapter BIOS” to “Enabled”

Return to the previous menu.

Select “Selectable Boot Settings”

                Set “Selectable Boot” to “Enabled”

                Set each boot device by selecting the field, pressing Enter, then selecting a LUN.

Escape back to the main menu, and select “Save Changes” when prompted.

Select “Select Host Adapter”

Select the other adapter this time (the NON-boot one)

Repeat the process as with the first adapter, EXCEPT:

                Disable “Host Adapter BIOS”

                Disable “Selectable Boot”

Escape to the main menu and save changes again.

Exit the utility and reboot the system.

 

Starting the Linux Install

 

Have the appropriate RedHat media in the optical drive and boot the system.  Boot to the default graphical install.  Watch when the “Loading SCSI Drivers” screen appears; you should see the module for the HBAs get loaded.  For the QLogic cards, this is qla2xxx or qla2400.

 

Once the graphical installer is fully started and prompting you to click next to begin, switch to the terminal by pressing “CTRL-ALT-F2”.

 

At the console, enter “ls /dev/sd*”.  You should see at least /dev/sda and /dev/sdb; there may be more.  Identify the SAN and local devices.  The local device will usually be /dev/sda.  You can test this by entering “fdisk /dev/sda”, then at the menu entering “p” to print the partition table.  It will also tell you the size of the volume.  Look for a size that indicates a SAN LUN or local storage, and remember which devices are which.
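
For example, a quick sketch of that check (device names and sizes will vary on your system):

    ls /dev/sd*
    fdisk -l /dev/sda    # size should match your internal disk (local storage)
    fdisk -l /dev/sdb    # size should match the LUN you provisioned (SAN)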

 

Addendum to Standard Linux Build – Partitioning

 

The name of the volume group created on the SAN device should be “sanvg”.  The /boot partition should be created on the SAN device as well.
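
For example, the layout used on the sample system later in this document looks like this (a sketch; the rootlv name comes from the grub.conf example below, while the sizes and the other logical volumes are illustrative):

    /dev/sdb1    /boot, roughly 150 MB (type 83, Linux)
    /dev/sdb2    LVM physical volume (type 8e) holding volume group “sanvg”,
                 with logical volumes such as rootlv for / and a swap volume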

 

Continue with the install from this point as described in “Standard Linux Build”.

 

First Boot After Install

 

The first boot of the system after installation will likely FAIL.  This is normal, as the installer did not choose the appropriate boot device when installing GRUB.  To boot the system you will need to modify the GRUB commands.

 

After you are informed of the failed boot, hit enter to get the GRUB menu.

 

OH NO!  GRUB comes up and the screen is all wiggedy wack!  Read Appendix A at the end of the document for help!

 

With the first boot option selected, press “e” for edit.

                The first line in the next menu should be something like “root (hd1,0)”.

                Press “e” to edit this line.

                                Change the line to read “root (hd0,0)”

                                Hit enter to accept your changes

                Press “b” to boot the system with the modified commands.

 

Later in this document we will edit grub.conf to permanently make this modification.
 
If you see only GRUB in the upper left of the screen after reboot:
GRUB may have been installed to the wrong device, so it may be necessary to boot from the DVD/CD in rescue mode (type linux rescue at the prompt) and then reinstall GRUB as follows:
    chroot /mnt/sysimage
    grub-install /dev/sdb

 

Install EMC PowerPath

 

The PowerPath software will perform failover functions as well as create special /dev devices allowing unambiguous access to the active path.

 

Fetch the install archive EMCpower.LINUX-5.1.2.00.00-021.tar.gz and extract it.  Then use rpm to install the appropriate package onto the system:
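
For example (a sketch; the exact rpm filename inside the archive varies by PowerPath version and architecture):

    tar xzf EMCpower.LINUX-5.1.2.00.00-021.tar.gz
    rpm -ivh EMCpower.LINUX-*.rpm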

 

Verify EMC PowerPath Install

 

PowerPath should now be installed.  To verify, type “lsmod | grep emc”.  You should see several modules with names beginning with emc.  This indicates that PowerPath has loaded successfully.

 

Start PowerPath with its init script.  Afterward you should see it coalesce the available paths to the SAN into a new virtual device.  Verify that this is your SAN device by reading the partition table with fdisk.

 

[root@ ~]# service PowerPath start

Starting PowerPath:  done

[root@ ~]# ls /dev/emcpower*

/dev/emcpower  /dev/emcpowera  /dev/emcpowera1  /dev/emcpowera2

 

As you can see above, there is now a block device /dev/emcpowera representing the SAN, which is backed by /dev/sdb - /dev/sde.

 

[root@ ~]# fdisk /dev/emcpowera

 

The number of cylinders for this disk is set to 9137.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

 

Command (m for help): p

 

Disk /dev/emcpowera: 75.1 GB, 75161927680 bytes

255 heads, 63 sectors/track, 9137 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

         Device Boot      Start         End      Blocks   Id  System

/dev/emcpowera1   *           1          19      152586   83  Linux

/dev/emcpowera2              20        9137    73240335   8e  Linux LVM

 

Command (m for help): q

 

A quick run of fdisk above shows that this is definitely our SAN volume.  The boot partition /dev/sdb1 is now available as /dev/emcpowera1.  
 

Modify modprobe.conf

 

At the end of /etc/modprobe.conf add the following line:

 

options scsi_mod max_luns=256
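
Note that on stock RHEL kernels scsi_mod is loaded from the initrd, so this option will not take effect at boot until the initrd is rebuilt.  A sketch using the standard mkinitrd tool (back up the current image first):

    cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
    mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)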

 

 

Modify grub.conf

 

Open the file and make the following edits:

 

Change any occurrence of “(hd*,0)” to “(hd0,0)”, where * is any number other than 0.

 

On any line that starts with kernel, remove “rhgb quiet” from the end of it.

 

Comment out the “hiddenmenu” option with a “#” at the start of the line.
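
If you prefer to script these edits, here is a sketch using sed (this assumes the stock /boot/grub/grub.conf location; review the file afterward before rebooting):

    sed -i 's/(hd[0-9]*,0)/(hd0,0)/g' /boot/grub/grub.conf
    sed -i 's/ rhgb quiet//' /boot/grub/grub.conf
    sed -i 's/^hiddenmenu/#hiddenmenu/' /boot/grub/grub.conf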

 

When you are finished, the file should look something like this:

 

# grub.conf generated by anaconda

#

# Note that you do not have to rerun grub after making changes to this file

# NOTICE:  You have a /boot partition.  This means that

#          all kernel and initrd paths are relative to /boot/, eg.

#          root (hd1,0)

#          kernel /vmlinuz-version ro root=/dev/sanvg/rootlv

#          initrd /initrd-version.img

#boot=/dev/sda

default=0

timeout=5

splashimage=(hd0,0)/grub/splash.xpm.gz

#hiddenmenu

title Red Hat Enterprise Linux AS (2.6.9-67.ELsmp)

        root (hd0,0)

        kernel /vmlinuz-2.6.9-67.ELsmp ro root=/dev/sanvg/rootlv

        initrd /initrd-2.6.9-67.ELsmp.img

title Red Hat Enterprise Linux AS-up (2.6.9-67.EL)

        root (hd0,0)

        kernel /vmlinuz-2.6.9-67.EL ro root=/dev/sanvg/rootlv

        initrd /initrd-2.6.9-67.EL.img

 

Modify the LVM Config

 

Finally, you must modify the LVM config file in /etc/lvm/lvm.conf to ignore the raw paths to the SAN and only use the PowerPath devices.

 

Find the line that sets up the default filter:

 

filter = [ "a/.*/" ]

 

Comment it out with a “#” at the start of the line, then put in the following line to tell LVM to only look at the emcpower devices and local storage:

 

filter = [ "a/sda/", "a/emcpower/", "r/.*/" ]

 

This assumes that /dev/sda is the local storage; you may have to modify this line if another device is local storage.

 

To make sure that the filter is working, run “vgscan” and verify that there are no messages about a “Duplicate PV”.

 

[root@mnsvliapp003 ~]# vgscan

  Reading all physical volumes.  This may take a while...

  Found volume group "sanvg" using metadata type lvm2

 

Setting Failover Policy

 

The appropriate failover policy will need to be set depending on the type of SAN.  Up to this point, only one path to each service processor should show as “active”; the rest show a state of “unlic”.  Running “powermt display dev=all” will show this information:

 
If the PowerPath license has not been installed, do so with:
    emcpreg -install
 

[root@~]# powermt display dev=all

Pseudo name=emcpowera

CLARiiON ID=APM00064800054 [prod_jboss1]

Logical device ID=60060160A9D01A00A2AD9882F5ACDC11 [prod_jboss1_lun20]

state=alive; policy=BasicFailover; priority=0; queued-IOs=0

Owner: default=SP A, current=SP A

==============================================================================

---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---

### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors

==============================================================================

   1 qla2xxx                   sdb       SP A4     active  alive      0      0

   1 qla2xxx                   sdc       SP B5     active  alive      0      0

   2 qla2xxx                   sdd       SP A5     unlic   alive      0      0

   2 qla2xxx                   sde       SP B4     unlic   alive      0      0

 

For a CLARiiON array, issue the following command to set the failover policy to “CLARiiON Optimal”.  This will cause all other paths to become active.  You will then need to save the configuration so that it persists across reboots.

 

[root@ ~]# powermt set policy=co

[root@ ~]# powermt display dev=all

Pseudo name=emcpowera

CLARiiON ID=APM00064403323 [dr_epicdb]

Logical device ID=600601602E811900C8E4B43C79AADC11 [dr_epicdb_LUN_100]

state=alive; policy=CLAROpt; priority=0; queued-IOs=0

Owner: default=SP A, current=SP A

==============================================================================

---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---

### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors

==============================================================================

   1 qla2xxx                   sdb       SP B4     active  alive      0      0

   1 qla2xxx                   sdc       SP A5     active  alive      0      0

   2 qla2xxx                   sdd       SP B4     active  alive      0      0

   2 qla2xxx                   sde       SP A5     active  alive      0      0

 


 

[root@ ~]# powermt save

 

CABLE PULL TEST

 

At this point, the configuration should be able to survive a cable pull test.  If the system cannot recover from the I/O errors after a cable pull, something is wrong with the configuration.  Review all steps and ensure that the output from the diagnostic commands is consistent with what is documented here.
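
One way to watch the failover while you pull the cable (a sketch; the two-second refresh interval is arbitrary):

    watch -n 2 powermt display dev=all
    tail -f /var/log/messages    # in a second terminal, to watch for I/O errors and path events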

 

Finishing Up

 

The system should now be configured to boot and handle multiple paths.  Have the extra paths configured on the SAN, then reboot the system.

 

During system startup, PowerPath may report a failure to start.  This is fine; all that failed was the module load, because the modules were already loaded from the initrd.

 

Checking the PowerPath Configuration

 

PowerPath should now see all the active paths to the storage.  To verify this, run the command “powermt display dev=all”.  This should return the expected number of paths and show what raw devices are backing each path.

 

[root@ ~]# powermt display dev=all

Pseudo name=emcpowera

CLARiiON ID=APM00064403323 [dr_epicdb]

Logical device ID=600601602E811900C8E4B43C79AADC11 [dr_epicdb_LUN_100]

state=alive; policy=CLAROpt; priority=0; queued-IOs=0

Owner: default=SP A, current=SP A

==============================================================================

---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---

### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors

==============================================================================

   1 qla2xxx                   sdb       SP B4     active  alive      0      0

   1 qla2xxx                   sdc       SP A5     active  alive      0      0

   2 qla2xxx                   sdd       SP B4     active  alive      0      0

   2 qla2xxx                   sde       SP A5     active  alive      0      0

 


 

Appendix A – GRUB Problems on IBM

 

On some of the IBM x86 hardware, when going into the GRUB menu after a failed boot, the screen goes berserk and is difficult to read.  The menu gets somewhat broken too, but it is still possible to modify the GRUB commands and boot the system:

 

When selecting the “root (hd1,0)” line, after you hit “e” to edit it, the line you are presented with in the editor reads “initrd /init”, NOT “root (hd1,0)”.  You CANNOT edit this line; follow this process instead:

 

Hit enter to accept the weird line.  Then press “b” to boot the system.  IT WILL FAIL AGAIN.  This is fine, now hit “e” to edit the line again, and this time you should be presented with the correct line.  Make the modifications described in “First Boot After Install”, and again press “b”.  This time, the system should boot.

 

The screen will return to normal after RedHat startup loads the font files.



[1]               Many BIOSes have an option for “PCI” or “Additional Boot Devices”, or even name the HBA directly.  If this is the case on the target system, use that selection instead of “Hard Disk”.  On the IBM hardware the PCI boot device magically becomes Hard Disk 0 or 1 in the boot order, so make sure they are both in there.  Boot from SAN may fail if there are bootable partitions on ANY local storage device.

 
To upgrade the kernel:

Move /etc/init.d/PowerPath to /root.

Comment out references to PowerPath pseudo (emcpower?) devices in system configuration files such as /etc/fstab and /etc/lvm/lvm.conf.

Reboot the machine.

Stop the Navisphere agent (CLARiiON only)
# /etc/init.d/naviagent stop

Stop the ECC Master Agent (Symmetrix only)
# /etc/init.d/eccmad stop

Kill any remaining "mlragent" processes.

Uninstall the EMCpower.LINUX rpm package
# rpm -e EMCpower.LINUX

Upgrade the kernel.

Reboot the machine.

Stop the Navisphere agent (CLARiiON only)
# /etc/init.d/naviagent stop

Stop the ECC Master Agent (Symmetrix only)
# /etc/init.d/eccmad stop

Kill any remaining "mlragent" processes.

Reinstall the EMCpower.LINUX rpm package; this also restores the /etc/init.d/PowerPath init script.

Uncomment the references to PowerPath pseudo devices in system configuration files such as /etc/fstab and /etc/lvm/lvm.conf.

Reboot the machine.
 

7 comments:

K. McDonald said...

Thank you very much for writing this tech note. It is exactly what I needed to configure RHEL5 for boot-from-san. I followed it almost word for word, except I've got a Symmetrix instead of a CLARiiON.

RAVI MOKA said...

Can you tell me how to create/put the boot image on the LUN? Do we have to create partitions on the LUN like /boot, swap, etc.?

RAVI MOKA said...

Can you please explain this part in detail?

"Addendum to Standard Linux Build – Partitioning



The name of the volume group created on the SAN device should be “sanvg”. The /boot partition should be created on the SAN device as well.



Continue with the install from this point as described in “Standard Linux Build”.

RAVI MOKA said...

Should the HBA driver entry in modprobe.conf be at the top of the list? My configuration fails during the first boot after the PowerPath install and after connecting the second SAN cable for multipath.

JWest said...

How does one tell MPath to skip the gatekeeper devices provided by EMC? Can't think of any reason at all to see them from the OS level: fdisk, dd, etc.

gkorten said...

Paul,
Have you replicated your OS LUNs? We do this with Windows and maintain like hardware in our DR site. Servers boot without issue. Hoping to do the same with RH 5.3.