[Linux-cluster] CLVM/GFS will not mount or communicate with cluster

Barry Brimer lists at brimer.org
Sun Dec 3 21:59:46 UTC 2006


This is a repeat of the post I made a few minutes ago.  I thought adding a
subject would be helpful.


I have a two-node cluster serving a shared GFS filesystem.  One of the nodes
fenced the other, and the node that was fenced is no longer able to communicate
with the cluster.

While booting the problem node, I receive the following error message:
Setting up Logical Volume Management:  Locking inactive: ignoring clustered
volume group vg00

I have compared the /etc/lvm/lvm.conf files on both nodes; they are identical.
The shared disk partition (/dev/sda1) is listed in the output of "fdisk -l".

There are no iptables firewalls active (/etc/sysconfig/iptables exists, but
iptables is chkconfig'd off).  I have added a simple iptables logging rule
(iptables -I INPUT -s <problem node> -j LOG) on the working node to verify that
packets from the problem node are reaching it, but no messages acknowledging
any cluster activity from the problem node are being logged in
/var/log/messages on the working node.
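For illustration, this is the kind of kernel LOG entry I am looking for (and not finding) and how I am checking for it.  The address 10.0.0.2 and the sample log line are made-up stand-ins; on the real working node the file is /var/log/messages:

```shell
# Stand-in for /var/log/messages so the check can be shown offline.
# 10.0.0.2 represents the problem node's address (an assumption).
cat > /tmp/messages.sample <<'EOF'
Dec  3 21:00:01 node1 kernel: IN=eth0 OUT= SRC=10.0.0.2 DST=10.0.0.1 PROTO=UDP DPT=6809
EOF

# Count LOG entries from the problem node; on my real working node this
# comes back 0, which is the problem.
grep -c 'SRC=10.0.0.2' /tmp/messages.sample
# → 1
```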

Both machines have the same Red Hat packages installed and are mostly up to
date; the updates they are missing are the same on both nodes, and none of
them involve the kernel, RHCS, or GFS.

When I boot the problem node, it successfully starts ccsd, but both cman and
fenced fail after timing out.  I have given the clvmd process an hour, and it
still will not start.

vgchange -ay on the problem node returns:

# vgchange -ay
  connect() failed on local socket: Connection refused
  Locking type 2 initialisation failed.
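For context, locking type 2 means LVM is configured to use the external cluster locking library, so the connect() failure points at clvmd's local socket not answering.  The relevant /etc/lvm/lvm.conf excerpt on both nodes looks roughly like this (a sketch using typical RHEL4 defaults, not a verbatim copy of my file):

```
global {
    # Type 2 = external locking library (clvmd-backed cluster locking).
    # The connect() failure above means clvmd's socket is not responding.
    locking_type = 2
    locking_library = "liblvm2clusterlock.so"
}
```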

I have the contents of /var/log/messages on the working node and the problem
node at the time of the fence, if that would be helpful.

Any help is greatly appreciated.

Thanks,
Barry



