[Linux-cluster] GFS2 support EMC storage SRDF??

Bryn M. Reeves bmr at redhat.com
Tue Dec 13 14:26:25 UTC 2011


On 12/12/2011 04:37 AM, Jankowski, Chris wrote:
> GFS2 or any other filesystems being replicated are not aware at all
> of the block replication taking place in the storage layer.  This is
> entirely transparent to the OS and filesystems clustered or not.
> Replication happens entirely in the array/SAN layer and the servers
> are not involved at all.

That's not strictly true. While it is the case that normal I/O issued on 
the active (R1 in SRDF-speak) side of the replicated volume needs no 
special handling, things get a bit more complex when the SRDF pair state 
changes (e.g. splits, failovers or partitions).

SRDF pair states govern which side of the replicated volume is writable, 
and a change in state may mean that a device abruptly becomes 
write-disabled.
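
As a rough illustration (nothing SRDF-specific): you can ask the kernel 
to revalidate a SCSI device and then look at whether it now reports 
itself read-only. The device name below is just a placeholder, and 
whether revalidation actually picks up an array-side write-disable will 
depend on the array and kernel version, so treat this as a sketch only:

#!/usr/bin/env python
# Sketch only: revalidate a SCSI disk and report its read-only flag.
import sys

def is_write_disabled(disk):
    # Ask the SCSI layer to revalidate the device so the write-protect
    # state is re-read from the array.
    with open("/sys/block/%s/device/rescan" % disk, "w") as f:
        f.write("1")
    # The block layer exposes the result as a 0/1 read-only flag.
    with open("/sys/block/%s/ro" % disk) as f:
        return f.read().strip() == "1"

if __name__ == "__main__":
    disk = sys.argv[1] if len(sys.argv) > 1 else "sdb"   # placeholder
    print("%s write-disabled: %s" % (disk, is_write_disabled(disk)))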

There's currently no mechanism for handling these changes automatically 
on the host side, so manual intervention (or clever scripting) is needed 
whenever such a change occurs.
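
By "clever scripting" I mean something along these lines - a minimal 
sketch that polls the pair state and shouts when it changes. It assumes 
the Solutions Enabler symrdf CLI is on the host, the device group name 
is made up, and the output parsing is deliberately crude, so adjust it 
to whatever your symrdf actually prints:

#!/usr/bin/env python
# Sketch: watch the SRDF pair state for a device group and report when
# it changes. What you do on a change (unmount/fence GFS2, page an
# admin, ...) is left as an exercise.
import subprocess
import time

DEVICE_GROUP = "MyDG"   # hypothetical device group name
INTERVAL = 60           # seconds between polls

def pair_state():
    out = subprocess.check_output(["symrdf", "-g", DEVICE_GROUP, "query"])
    # Crude: look for known pair state keywords anywhere in the output.
    for state in ("Synchronized", "Split", "Failed Over", "Suspended",
                  "Partitioned"):
        if state in out.decode("utf-8", "replace"):
            return state
    return "Unknown"

if __name__ == "__main__":
    last = pair_state()
    while True:
        time.sleep(INTERVAL)
        current = pair_state()
        if current != last:
            print("SRDF pair state changed: %s -> %s" % (last, current))
            last = current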

> So, there is nothing for Red Hat to support or not support - they
> just do not see it.  Nor do they have any ability to see it even if
> they wanted.  Very often the array ports used for replication are
> separate from the host-facing ports and in separate FC zones.

SRDF is normally used between two or more Symmetrix arrays (often but 
not always in physically separate locations). Replication traffic 
travels over a dedicated inter-array link via a remote link director (a 
specialised symm channel adapter) rather than the regular front end 
director ports of the array.

In a typical deployment the R2 (remote/slave) side of the replicated 
volume is not reachable from the same fabric as the R1 (master).

This means additional considerations apply in a disaster recovery 
scenario, since any backup hosts on the R2 side will need to be 
configured to cope with the fact that LUNs and ports will have different 
WWIDs than on the R1 site (this may cause problems for some multipath 
boot implementations, for example).
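
One way to take some of the pain out of that is to pre-generate the 
multipath configuration for the R2 site from the LUNs as they appear 
there. A rough sketch - the scsi_id path is the RHEL 6 location and the 
alias naming is entirely made up:

#!/usr/bin/env python
# Sketch: print multipath.conf alias stanzas for every LUN visible on an
# R2-side host, so the DR multipath configuration can be prepared in
# advance. Assumes /lib/udev/scsi_id and that the WWID it prints is the
# identifier multipathd will key on.
import glob
import os
import subprocess

def wwid(dev):
    return subprocess.check_output(
        ["/lib/udev/scsi_id", "--whitelisted", "--device=" + dev]
    ).decode().strip()

if __name__ == "__main__":
    # Several sd devices may be paths to the same LUN, so key on WWID.
    luns = {}
    for path in sorted(glob.glob("/sys/block/sd*")):
        name = os.path.basename(path)
        luns.setdefault(wwid("/dev/" + name), name)
    print("multipaths {")
    for lun_wwid, name in sorted(luns.items()):
        print("    multipath {")
        print("        wwid  %s" % lun_wwid)
        print("        alias r2_%s" % name)
        print("    }")
    print("}")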

Regards,
Bryn.



