[Linux-cluster] GFS, iSCSI, multipaths and RAID

Alex Kompel barbos at gmail.com
Wed May 28 23:16:55 UTC 2008


On Wed, May 28, 2008 at 2:37 PM, Ross Vandegrift <ross at kallisti.us> wrote:
> On Wed, May 28, 2008 at 02:18:39PM -0700, Alex Kompel wrote:
>> I would not use multipath I/O with iSCSI unless you have specific
>> reasons for doing so. iSCSI is only as highly available as your network
>> infrastructure allows it to be. If you have a full failover within the
>> network then you don't need multipath. That simplifies configuration a
>> lot. Provided your network core is fully redundant (both link and
>> routing layers), you can connect 2 NICs on each server to separate
>> switches and bond them (google for "channel bonding"). Once you have
>> redundant network connection you can use the setup from the article I
>> posted earlier. This will give you iSCSI endpoint failover.
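
(As a rough sketch of that setup, with made-up interface names and
addresses and RHEL-style config files assumed:

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=active-backup miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  IPADDR=192.168.10.11
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none

With mode=active-backup the bond survives the loss of either NIC or
switch, and the iSCSI session never notices.)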
>
> This depends on a lot of things.  In all of the iSCSI storage systems
> I'm familiar with, the same target is provided redundantly via
> different portal IPs.  This provides failover in the case of an iSCSI
> controller failing on the storage system.  The network can be as
> redundant as you like, but without multipath, you won't survive a
> portal failure.
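
(For illustration, with open-iscsi you would log into the same target
through both portals and let dm-multipath group the resulting SCSI
devices; the IQN and addresses below are made up:

  iscsiadm -m discovery -t sendtargets -p 10.0.0.1
  iscsiadm -m discovery -t sendtargets -p 10.0.0.2
  iscsiadm -m node -T iqn.2008-05.com.example:storage.lun1 -p 10.0.0.1 --login
  iscsiadm -m node -T iqn.2008-05.com.example:storage.lun1 -p 10.0.0.2 --login

Each session shows up as a separate SCSI device, e.g. /dev/sdb and
/dev/sdc, which dm-multipath then maps to a single /dev/mapper node.)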

In this case a portal failure is handled by host failover mechanisms
on the storage side (heartbeat, Red Hat Cluster, etc.) and a
connection failure is handled by the network layer. Sometimes you do
have to use multipath (for example, if there is no way to do
transparent failover on the storage controllers), but it adds extra
complexity on the initiator side, so if there is a way to avoid it,
why not do so?
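
(For comparison, without multipath the initiator logs into a single
virtual portal IP that the storage cluster fails over between
controllers; the address and IQN are again made up:

  iscsiadm -m discovery -t sendtargets -p 10.0.0.100
  iscsiadm -m node -T iqn.2008-05.com.example:storage.lun1 -p 10.0.0.100 --login

  # /etc/iscsi/iscsid.conf: give the portal time to fail over before
  # the SCSI layer starts returning errors (the value is just an example)
  node.session.timeo.replacement_timeout = 120

The LUN then shows up directly as /dev/sdX with no dm-multipath layer
on top of it.)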

> If you bond between two different switches, you'll only be able to do
> failover between the NICs.  If you use multipath, you can round-robin
> between them to provide a greater bandwidth overhead.
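
(Roughly, that round-robin setup would look like this in
/etc/multipath.conf; the vendor/product strings are placeholders for
whatever the array reports:

  defaults {
          user_friendly_names yes
  }
  devices {
          device {
                  vendor                  "EXAMPLE"
                  product                 "ISCSI-LUN"
                  path_grouping_policy    multibus
                  path_selector           "round-robin 0"
                  rr_min_io               100
                  no_path_retry           queue
          }
  }

path_grouping_policy multibus puts both paths into one active group,
so I/O is spread across them instead of only failing over.)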

The same goes for bonding: with link aggregation (active-active
bonding) you get load balancing across both NICs as well as failover.
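
For example, depending on what the switches support (802.3ad needs
switch support and both ports on the same switch or a stacked pair,
while balance-alb needs no switch support):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=802.3ad miimon=100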

-Alex



