I recently had to load CentOS 5.6 on several HP BL2x220C blade servers to run Enomaly SpotCloud. One of the requirements was to provision disk for KVM virtual machine storage. This could be local disk or optionally iSCSI disk. The following describes the steps I went through to configure iSCSI successfully.


1. You will need to configure your storage system. I was using an HDS HNAS Mercury cluster. Full HNAS configuration is beyond the scope of this post, but in essence you need to create a File System of the required size, then assign that File System to an EVS (Hitachi terminology for a virtual storage system) with an assigned cluster node and an IP address on the storage VLAN. You then create iSCSI Logical Units within the File System - one LUN per host. Lastly, create iSCSI targets within the EVS iSCSI domain, each with its LUN ID, LUN name, and an access configuration that permits only the host that will use it. You end up with a series of globally unique target names (IQNs), each backed by a LUN of a fixed size (e.g. 500GB) and accessible only from a single host, for example: iqn.2011-04.spotcloud:sc-evs-iscsi01.sc-target01.


2. Back to the CentOS side of things - make sure your interfaces are configured correctly and you can ping the storage system. I have two Virtual Connect modules in the HP C7000 enclosure - hence two interfaces were available. Static IPs were used on the storage network. I edited:


/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network
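
As an example, the storage-network interface file (eth1) might look something like this - the address shown is a placeholder, use whatever fits your storage VLAN:

# /etc/sysconfig/network-scripts/ifcfg-eth1 - storage network, static IP
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
# example/placeholder address on the storage network
IPADDR=10.255.4.11
NETMASK=255.255.255.0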


3. Make sure the iSCSI initiator package (iscsi-initiator-utils, which provides the iscsi and iscsid daemons) is installed. You can do this via yum or from the original source media. Via yum:


yum install iscsi-initiator-utils


Via virtual media:


mount /dev/cdrom /mnt
cd /mnt/CentOS
rpm -ivh iscsi*
cd /
umount /mnt


Don't forget to eject the virtual media.
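
Either way, a quick check that the package actually landed:

rpm -q iscsi-initiator-utils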


4. Make sure iSCSI starts on boot and start the daemon:


chkconfig iscsi on
service iscsi start
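
To confirm the daemon is running and registered to start at boot:

service iscsi status
chkconfig --list iscsi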


5. Discover your iSCSI targets:


iscsiadm -m discovery -t sendtargets -p 10.255.4.10


The IP address is that of the storage system.
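
A successful discovery returns one line per target in the form portal,portal-group-tag target-name, so expect something along these lines (the group tag shown here is illustrative and will depend on your storage configuration):

10.255.4.10:3260,1 iqn.2011-04.spotcloud:sc-evs-iscsi01.sc-target01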


6. Delete any unnecessary iSCSI nodes:


service iscsi stop
iscsiadm -m node -T <nodename> -o delete
service iscsi start


The <nodename> is the IQN mentioned earlier. You may discover more nodes than you actually want a given host to use - so configure the storage system to filter the available LUNs by client source IP address.
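
To see the node records open-iscsi currently holds (and therefore what to pass to the delete command), list them first:

iscsiadm -m node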


7. Work out which device is the iSCSI node:


fdisk -l
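
In this setup the iSCSI LUN showed up as /dev/sdb (used in the next step); your device name and size will differ, but the new disk should look roughly like this illustrative snippet:

Disk /dev/sdb: 536.8 GB, 536870912000 bytes
255 heads, 63 sectors/track, 65270 cylinders
Disk /dev/sdb doesn't contain a valid partition table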


8. Create a partition on the new device, then format it as ext3 (to match the fstab entry in step 10):


fdisk /dev/sdb
mkfs.ext3 /dev/sdb1
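
fdisk is interactive - the usual sequence is n (new partition), p (primary), 1, accept the default first and last cylinders, then w to write. If you are repeating this across several hosts, the same keystrokes can be fed on stdin; a rough sketch (the two blank lines accept the default first and last cylinders):

# non-interactive sketch of the fdisk session above
fdisk /dev/sdb <<EOF
n
p
1


w
EOF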


9. Label the device:


e2label /dev/sdb1 /sc-node01
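
Running e2label with just the device name prints the current label, which is a quick sanity check:

e2label /dev/sdb1

This should print /sc-node01.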


10. Configure the mount in /etc/fstab (note the _netdev mount option to ensure the iSCSI LUN is mounted after networking has been brought up):


LABEL=/sc-node01 /var/lib/xen/images ext3 defaults,_netdev,noatime 0 0
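
Create the mount point if it does not already exist, mount it via the new fstab entry, and check the result:

mkdir -p /var/lib/xen/images
mount /var/lib/xen/images
df -h /var/lib/xen/images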


And that's it - you are in business. Lastly, if you are interested, here is the Virtual Connect configuration used to configure the blades. It sets up interfaces 1 and 2 on blades 1A and 1B. Interface 1 carries VLAN 1050 untagged (eth0) and VLAN 1051 tagged (eth0.1051); interface 2 carries VLAN 1052 untagged (eth1), which is the storage network.


add profile D4-C2-B01 -NoDefaultEnetConn -NoDefaultFcConn -NoDefaultFcoeConn
add enet-connection D4-C2-B01
add enet-connection D4-C2-B01
add server-port-map D4-C2-B01:1 SC-Management VlanID=1050 Untagged=True
add server-port-map D4-C2-B01:1 SC-VM VlanID=1051
add server-port-map D4-C2-B01:2 SC-iSCSI VlanID=1052 Untagged=True
assign profile D4-C2-B01 enc0:1A

add profile D4-C2-B02 -NoDefaultEnetConn -NoDefaultFcConn -NoDefaultFcoeConn
add enet-connection D4-C2-B02
add enet-connection D4-C2-B02
add server-port-map D4-C2-B02:1 SC-Management VlanID=1050 Untagged=True
add server-port-map D4-C2-B02:1 SC-VM VlanID=1051
add server-port-map D4-C2-B02:2 SC-iSCSI VlanID=1052 Untagged=True
assign profile D4-C2-B02 enc0:1B
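
For completeness, the tagged VLAN on the CentOS side is just another ifcfg file with VLAN=yes - something along these lines (the address is a placeholder) gives you the eth0.1051 interface mentioned above:

# /etc/sysconfig/network-scripts/ifcfg-eth0.1051 - VM network, tagged VLAN 1051
DEVICE=eth0.1051
VLAN=yes
ONBOOT=yes
BOOTPROTO=static
# example/placeholder address on the VM network
IPADDR=10.255.5.11
NETMASK=255.255.255.0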


Originally published 10 May 2013.