[Linux-cluster] GFS, iSCSI, multipaths and RAID

michael.osullivan at auckland.ac.nz
Fri May 30 09:16:23 UTC 2008


Hi everyone,

We chose not to bond the NICs because we'd heard that bonding does not
scale bandwidth linearly. To keep network performance high we wanted the
load spread across multiple links, and multipathing seemed the best way to
do that.
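
As a rough sketch of what that looks like on each server (assuming the
open-iscsi initiator; the portal addresses below are made-up placeholders),
each NIC talks to the storage on its own subnet, so every server ends up
with two sessions per storage device:

  # discover the targets via both data networks (addresses are examples only)
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10
  # log in to everything we found; the multipath layer then sees two
  # paths to each LUN
  iscsiadm -m node -L all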

The iSCSI setup suggested by the article
http://www.pcpro.co.uk/realworld/82284/san-on-the-cheap/page1.html uses
one storage device as the primary storage and the other as the secondary.
The iSCSI target is presented via the first device and fails over to the
second. This tolerates the failure of either device, but does not allow
the storage load to be shared between them.
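
In dm-multipath terms (assuming that's what we end up using, per point 2
below), the article's primary/secondary behaviour corresponds to the
"failover" path grouping policy, while spreading I/O across all paths is
"multibus". A minimal /etc/multipath.conf fragment, with a made-up WWID
and alias:

  defaults {
      user_friendly_names yes
  }
  multipaths {
      multipath {
          # WWID and alias are placeholders for one of our LUNs
          wwid   3600a0b80000f1234000000001234abcd
          alias  storage0
          # "failover" would reproduce the active/passive setup above;
          # "multibus" puts all paths in one group so I/O is spread across them
          path_grouping_policy multibus
      }
  }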

By having the setup described in
http://www.ndsg.net.nz/ndsg_cluster.jpg/view (or
http://www.ndsg.net.nz/ndsg_cluster.jpg/image_view_fullscreen for the
fullscreen view) with multipath, we provide two distinct paths between each
server and each storage device, both of which can be used to send/receive
data. By creating a RAID-5 array out of the iSCSI disks I hope I have
allowed both devices to share the storage load.
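
A sketch of the RAID-5 step with mdadm (device names are placeholders,
and I'm assuming at least three multipathed LUNs, since RAID-5 needs
three or more members):

  # build a software RAID-5 set across the multipathed iSCSI disks
  mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/mapper/storage0 /dev/mapper/storage1 /dev/mapper/storage2
  # check that the array assembled cleanly
  cat /proc/mdstat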

Our setup is intended to provide diverse protection for the storage system
via:

1) RAID for the storage devices;
2) multipathing over the network - we've had dm multipath recommended
instead of mdadm - any comments?;
3) a cluster for the servers using GFS to handle locking on the shared
storage (see the GFS sketch below);

but also allows all the components to share the load instead of using a
primary/secondary type setup (which largely "wastes" the secondary
resources). We are going to use IOMeter to test our setup and see how it
performs. We will then run the same tests with different parts of the
network disabled and see what happens.
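
For point 3, a minimal sketch of putting GFS on top of the array (the
cluster name, filesystem name and mount point are placeholders; two
journals for our two servers):

  # make a GFS filesystem with DLM locking
  gfs_mkfs -p lock_dlm -t ndsg_cluster:gfs0 -j 2 /dev/md0
  # mount it on each node
  mount -t gfs /dev/md0 /mnt/gfs0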

As usual any comments/suggestions/criticisms are welcome. Thanks for all
the discussion, it has been very useful and enlightening, Mike



