[Linux-cluster] GFS2 - force release locks

Giorgio Luchi giorgio.luchi at welcomeitalia.it
Tue Apr 27 14:15:43 UTC 2010


Hi all,

We're currently setting up a three-node cluster for managing e-mail. Each
node has a local disk for the operating system (CentOS 5.4), plus three
disks shared via GFS2. We plan to split the domains across the three nodes
to avoid lock contention as much as possible: each node reads and writes
only one of the shared disks. We also plan to achieve fault tolerance with
a Cisco CSS that designates, for each domain, one node as the primary
server and a second node as the sorry server in case of failure (or
maintenance); in that case one node takes over the domains of the faulty
node and reads/writes on two shared disks.

We have this question. Suppose we shut down a node for maintenance. The
Cisco CSS recognizes that the primary server is down and switches the
traffic to the sorry server; the cluster does its work, so customers
notice no problems. After a few days we restart the node and the Cisco CSS
restores the traffic to the primary server. At this point everything is
back to the "default" scenario, but the domains served by the restarted
node will suffer a performance penalty due to the locks still owned by the
sorry server. Is there a way to force a node to release all the locks
related to a directory (or to a mount point)? If possible, we'd like to do
this without unmounting the shared disk, because unmounting would also
require restarting all the services that use the three shared disks.

Thanks in advance, regards
Giorgio Luchi





