[Linux-cluster] GFS continue to reboot nodes

Bob Peterson rpeterso at redhat.com
Mon Jan 25 13:56:40 UTC 2010


----- "Muhammad Ammad Shah" <mammadshah at hotmail.com> wrote:
| There is nothing in /var/log/messages, but when I checked the
| console of the node there were some messages related to GFS:
| DLM id:0 ...
| 
| So I removed GFS and switched back to the File System (ext3) resource.
| 
| Can I install Oracle on a File System (ext3) resource?
| 
| Or how can I troubleshoot the GFS reboots?
| I need help.

Hi Muhammad,

To answer a few questions on this thread:
First, the number of journals should not cause GFS any problems.  It
is perfectly fine to create a number of journals greater than or
equal to the number of nodes, so both 3 and 4 are okay.
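
For example (just a sketch; the cluster name, file system name, and
device below are placeholders for your own), a four-node cluster
needs at least four journals at mkfs time, and more journals can be
added later to a mounted file system with gfs_jadd:

  # make a GFS file system with 4 journals (one per node)
  gfs_mkfs -p lock_dlm -t mycluster:mygfs -j 4 /dev/your_vg/your_lv

  # add one more journal later if you add a node
  gfs_jadd -j 1 /mnt/mygfs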

Second, if you created the volume group without clvmd running, you
may have created it without the clustered bit set.  You may need to
run this command:  vgchange -cy /dev/your_vg
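
To check whether the clustered bit is set (using the standard LVM
tools; your_vg is a placeholder), look at the volume group
attributes:

  # the sixth attribute character is 'c' when the VG is clustered
  vgs -o vg_name,vg_attr your_vg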

Third, GFS does not reboot the system.  The system is most likely
being fenced due to a problem discovered by the cluster infrastructure.
That problem might be GFS's fault, but we have no way to know.
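
A few commands from the cluster suite can help confirm that (run
them on a surviving node, since the fenced node's own logs are lost):

  # overall cluster membership and quorum state
  cman_tool status
  clustat

  # fence events recorded by the other nodes
  grep -i fence /var/log/messages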

Fourth, if your file system is shared between nodes, you want a
cluster-aware file system.  Ext3 will only work properly when the
file system is mounted by a single computer at a time; mounting it
on more than one node at once will corrupt it.  If more than one
node needs it mounted, use GFS or GFS2.
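
Once the file system is GFS, every node can safely mount the same
device at the same time, because GFS coordinates access through the
DLM (the device and mount point here are hypothetical):

  mount -t gfs /dev/your_vg/your_lv /mnt/shared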

Fifth, to debug the problem, please send the console messages to
this mailing list so we can tell more about the symptoms.
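
If the node is fenced before those messages reach disk, one way to
capture them is to forward kernel messages to another machine over
syslog (a sketch; "loghost" is a placeholder for a real host name):

  # on the failing node, add to /etc/syslog.conf:
  kern.*          @loghost

  # then restart the syslog daemon
  service syslog restart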

I hope this helps.

Regards,

Bob Peterson
Red Hat File Systems



