[Linux-cluster] Two-node cluster GFS2 confusing
rpeterso at redhat.com
Mon Jun 16 12:20:44 UTC 2014
----- Original Message -----
> Hello everyone,
> I'm new to Linux clustering. I have built a two-node cluster (without
> qdisk) running:
> Redhat 6.4
> My cluster could fail-over (back and forth) between two nodes for these 3
> resources: ClusterIP, WebFS (Filesystem GFS2 mount /dev/sdc on
> /mnt/gfs2_storage), WebSite ( apache service)
> My problem occurs when I stop/start node in the following order: (when both
> nodes started)
> 1. Stop: node1 (shutdown) -> all resources fail over to node2 -> all resources
> still working on node2
> 2. Stop: node2 (stop service: pacemaker then cman) -> all resources stop (of course)
> 3. Start: node1 (start service: cman then pacemaker) -> only ClusterIP
> started, WebFS failed, WebSite not started
> I don't have any clues for tracking down this case; I just guess the problem
> comes from file-system locking. Please give me some advice.
Some thoughts on your problem:
(1) If this is truly Redhat 6.4, and you have a support contract with Red Hat,
you should call the support number with Global Support Services and file a
ticket. They'll be able to help.
(2) You didn't explain what your symptoms were. In what way does it fail?
(3) Why do you suspect the problem comes from file-system locking? Do you
    mean from GFS2? What symptom makes you think it might be the file
    system? Were there messages on the console or in dmesg indicating a
    kernel issue?
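If you're not sure where to look, commands along these lines can gather that evidence on the node where WebFS fails to start. This is only a sketch: it assumes RHEL 6 with gfs2-utils installed, and that /dev/sdc is the GFS2 device as in your description.

```shell
# Kernel messages mentioning GFS2 or DLM (the distributed lock manager)
dmesg | grep -iE 'gfs2|dlm'
grep -iE 'gfs2|dlm|fence' /var/log/messages | tail -n 50

# Is the file system actually mounted, and with which options?
mount | grep gfs2

# Lock table recorded in the superblock; the first part must match the
# cluster name in cluster.conf (device path assumed to be /dev/sdc)
gfs2_tool sb /dev/sdc table
```

A mismatch between the lock table's cluster name and the running cluster's name, or fencing/DLM errors in the logs, would point at the file system; if those are clean, the problem is more likely in the resource agent or start ordering.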
(4) I thought RHEL 6.4 shipped with cman/rgmanager, not pacemaker.
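    You can confirm which stack is actually running on each node with
    something like the following (a sketch; service names are the usual
    RHEL 6 ones, and only the tools for the stack you actually use will
    be present):

```shell
service cman status           # membership/quorum layer, both stacks
service rgmanager status      # classic RHEL 6 HA resource manager
service pacemaker status      # pacemaker-on-cman, if that's what you built

cman_tool status              # quorum and node view from cman
clustat                       # rgmanager's resource/service view
crm_mon -1                    # pacemaker's resource view, one-shot
```

    Knowing which resource manager is in charge narrows down which logs
    and agents to inspect.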
Red Hat File Systems