How To Configure an iSCSI Initiator with Multipathing

What is Multipathing?

Device mapper multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. These I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths.

Why use DM-Multipath?


- Redundancy
DM-Multipath can provide fail-over in an active/passive configuration. In an active/passive configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path.

- Improved Performance
DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load.

Active/Passive Multipath Configuration with One RAID Storage Device

Hardware requirements:
- 2 SAN switches
- 2 HBAs on the server, creating 2 I/O paths from the server to the storage device, and
- 2 RAID controllers on the storage array.

There are many points of possible failure in this configuration:
- HBA failure
- Cable failure
- SAN switch failure
- Array controller port failure

With DM-Multipath configured, a failure at any of these points will cause DM-Multipath to switch to the alternate I/O path. Note that multipathing only protects against path failures: if the storage itself becomes unavailable, you will still lose access to it.

Configuring Multipathing

We will configure multipathing over two iSCSI paths on vm1, using the storage1 and storage2 networks.

To see which interfaces are attached to vm1 and which IP addresses they have been assigned:
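
    # show attached interfaces and their assigned IP addresses
    ip addr show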

First of all, make sure that the device-mapper-multipath package is installed on vm1.
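
If it is missing, it can be installed with yum (assuming a RHEL/CentOS machine with access to the standard repositories):

    yum -y install device-mapper-multipath
    rpm -q device-mapper-multipath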

Once the device-mapper-multipath package is installed, you will need to create a configuration file for the multipath daemon, /etc/multipath.conf. The easiest way to create this file is with the mpathconf utility.
If a file called /etc/multipath.conf already exists, the mpathconf command will edit that file; if no such file exists, mpathconf will copy the default configuration from /usr/share/doc/device-mapper-multipath-*/multipath.conf. If that file does not exist either, mpathconf will create a new configuration file from scratch.

To create a default configuration and to start and enable the multipathd daemon, you can use the mpathconf utility.
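
For example, the --with_multipathd option documented in the mpathconf man page does all of this in one step:

    # write a default /etc/multipath.conf, enable multipathing,
    # and start the multipathd daemon
    mpathconf --enable --with_multipathd y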

Before you begin, make sure that the iSCSI target defined on filer is still running.
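
Assuming the scsi-target-utils (tgtd) target used in this setup, a quick check on filer would be:

    # confirm the target daemon is running and the target is exported
    service tgtd status
    tgt-admin --show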

On filer, add an ACL for vm1 on the storage2 network to your iqn.2014-03.com.example.storage:first target.

Make sure that /etc/tgt/targets.conf on filer looks similar to this:
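
A minimal sketch of the target definition. The backing-store path and the initiator addresses here are assumptions for illustration; substitute your own LUN and vm1's real addresses on the storage1 and storage2 networks:

    <target iqn.2014-03.com.example.storage:first>
        # hypothetical backing device; use your own LUN
        backing-store /dev/vdb
        # ACL for vm1 on the storage1 network (assumed address)
        initiator-address 192.168.1.11
        # newly added ACL for vm1 on the storage2 network (assumed address)
        initiator-address 192.168.2.11
    </target>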

Restart the iSCSI target daemon on filer to activate the changes.
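
On a SysV-init system:

    # restart tgtd so the new ACL becomes active
    service tgtd restart

If you need to keep existing sessions alive, tgt-admin --update ALL --force should reprocess the configuration without a full restart, though the plain restart matches this exercise.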

On vm1, set the iSCSI initiator name to iqn.2014-03.com.example.storage:first.
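
The initiator name is kept in /etc/iscsi/initiatorname.iscsi; overwrite it and restart iscsid so the change takes effect:

    echo "InitiatorName=iqn.2014-03.com.example.storage:first" > /etc/iscsi/initiatorname.iscsi
    service iscsid restart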

On vm1, set all of the default iSCSI timeouts to 2 seconds.
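
These live in /etc/iscsi/iscsid.conf. The ones that matter most for fast multipath failover are the replacement and NOP-Out timeouts; edit the file so that (at least) these lines read:

    node.session.timeo.replacement_timeout = 2
    node.conn[0].timeo.noop_out_interval = 2
    node.conn[0].timeo.noop_out_timeout = 2

iscsid.conf contains a handful of other timeo parameters (login, logout, abort); set those to 2 as well to match this exercise.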

On vm1, discover and connect (log in) to the target you created on filer, using both the storage1 and storage2 networks.
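
Assuming filer is reachable at 192.168.1.10 on storage1 and 192.168.2.10 on storage2 (substitute your own portal addresses):

    # discover the target over both networks, then log in everywhere
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10
    iscsiadm -m node -l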

On vm1, create a default configuration file for the multipathd daemon, without using User Friendly Names. Do not yet start the multipathd daemon.
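
mpathconf does this in one command; without --with_multipathd y it will not touch the daemon:

    # default config, user friendly names disabled, daemon left alone
    mpathconf --enable --user_friendly_names n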

Add the following line to the defaults section of /etc/multipath.conf. This makes sure that spaces in WWIDs are stripped, which could otherwise lead to some confusion.
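
On RHEL 6 this is done through the getuid_callout option: it is the stock scsi_id callout with the --replace-whitespace flag added, which is what strips (replaces) the spaces:

    getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"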

Start the multipathd daemon. You should now have a new device node, /dev/mapper/1IET_00010001.
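
On a SysV-init system:

    # start multipathd now and on every boot
    service multipathd start
    chkconfig multipathd on
    # verify that the multipath device was created
    multipath -ll
    ls /dev/mapper/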

Testing Multipathing

We will create a partition on the multipathed storage, put a file system on it, mount it, and test the multipathing setup.

Make sure that the iSCSI target defined on filer is still running and that multipathing is configured on vm1.

1. Create a 128MB partition on your multipathed device.
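
For example with parted (fdisk works just as well; the 1 MiB start keeps the partition aligned):

    # label the multipath device and create a 128 MiB partition
    parted -s /dev/mapper/1IET_00010001 mklabel msdos
    parted -s /dev/mapper/1IET_00010001 mkpart primary 1MiB 129MiB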

2. Make sure that vm1 has a device node for the new partition.
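
If the node has not appeared automatically, kpartx creates the partition mappings for device-mapper devices:

    # map the new partition; it should appear as /dev/mapper/1IET_00010001p1
    kpartx -a /dev/mapper/1IET_00010001
    ls /dev/mapper/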

3. Create an ext4 file system on the new partition and temporarily mount it on /mnt.
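
Using the partition node created in the previous step:

    mkfs -t ext4 /dev/mapper/1IET_00010001p1
    mount /dev/mapper/1IET_00010001p1 /mnt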

4. Observe the output from multipath -ll on vm1. Both paths should report active ready running.
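
The output will look roughly like the sketch below; the size, host:bus:target:lun numbers, and sd device names will differ on your system:

    1IET_00010001 dm-0 IET,VIRTUAL-DISK
    size=512M features='0' hwhandler='0' wp=rw
    |-+- policy='round-robin 0' prio=1 status=active
    | `- 3:0:0:1 sda 8:0  active ready running
    `-+- policy='round-robin 0' prio=1 status=enabled
      `- 4:0:0:1 sdb 8:16 active ready running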

5. Bring down the eth2 interface on vm1 and attempt to write a file in /mnt. Run the command multipath -ll. You might see some iSCSI errors on the console.
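
A quick way to exercise the failover (the test file name is arbitrary):

    # take one path down and write through the multipath device
    ifdown eth2
    echo "failover test" > /mnt/testfile
    multipath -ll

In the multipath -ll output, the failed path should now show as faulty while I/O continues over the surviving path.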

6. Bring the eth2 interface on vm1 back up, and examine the output of multipath -ll again.
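
Once the interface is back, multipathd's path checker should mark the path healthy again after a few seconds:

    ifup eth2
    multipath -ll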

The active/passive multipath configuration is now complete.
