[Linux-cluster] Networking guidelines for RHCS across datacenters
tom at netspot.com.au
Wed Jun 10 01:27:10 UTC 2009
On 05/06/2009, at 6:52 PM, brem belguebli wrote:
> That sounds pretty similar to the question I asked on this mailing-
> list last May (https://www.redhat.com/archives/linux-cluster/2009-May/msg00093.html).
> We are in the same situation, already doing "Geo-clusters" with other
> technologies, and we are looking at RHCS to provide the same service.
> Latency could indeed be a problem if it is too high, but in many
> cases (at several companies I've worked for) the datacenters are a few
> tens of kilometers apart, with a maximum latency close to 1 ms, which
> is not a problem.
> Let's consider this kind of setup: two datacenters separated by a
> 1 ms delay, each hosting a SAN array, and each array connected to two
> SAN fabrics extended between the two sites.
> What would prevent us from building Geo-clusters without having to
> rely on a database replication mechanism? The setup I would like to
> implement would also be used to provide NFS services that are
> disaster-recovery proof.
> Obviously, such a setup would rely on LVM mirroring, so that a node
> hosting a service is able to write to both the local and the distant
> SAN arrays.
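As a rough sketch, the LVM mirroring described above might be set up along these lines (the device-mapper paths and sizes here are hypothetical, and for cluster-aware mirrors the cmirror service must be running on all nodes):

```shell
# Hypothetical multipath devices, one per SAN array
pvcreate /dev/mapper/local_san /dev/mapper/remote_san
vgcreate vg_geo /dev/mapper/local_san /dev/mapper/remote_san

# One mirror leg on each array, so every write lands on both sites;
# in a clustered VG, cmirror coordinates the mirror log between nodes
lvcreate --mirrors 1 -L 100G -n lv_nfs vg_geo \
    /dev/mapper/local_san /dev/mapper/remote_san
```

The cost, of course, is that every write is synchronous to the remote array, so the 1 ms inter-site latency is added to each write.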
I have been wondering whether the same could be done (cross-site RHCS)
using SAN replication and multipath instead, avoiding LVM mirroring.
This is going to depend strongly on the storage replication failover
time: if I/O to the shared storage devices is queued for too long, the
cluster will stop. Does anyone have experience with how quickly the
failover would need to happen for RHCS to tolerate it?
I have been meaning to test this but have not had a chance...
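For what it's worth, the knob that bounds how long multipath queues I/O when all paths are down is no_path_retry. A sketch of a multipath.conf stanza (the WWID and the retry count are made up) that queues for roughly no_path_retry * polling_interval seconds before failing I/O back to the filesystem, a window that would presumably need to cover the array's replication failover but stay short enough that the cluster doesn't fence the node first:

```
defaults {
        polling_interval 5
}
multipaths {
        multipath {
                wwid          3600508b4000deadbeef0000  # hypothetical WWID
                no_path_retry 12    # queue ~12 * 5 s = 60 s, then fail I/O
        }
}
```

Setting no_path_retry to "queue" instead would queue forever, which is exactly the case where the cluster hangs rather than fails over.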