<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii">
<META NAME="Generator" CONTENT="MS Exchange Server version 5.5.2658.2">
<TITLE>RE: [Linux-cluster] Problem with RHEL3 and GFS-6.0.0.10, Kernel Panic</TITLE>
</HEAD>
<BODY>
<P><FONT SIZE=2>>>On Thu, Sep 16, 2004 at 11:05:02AM +0100, Paulo Sousa wrote:</FONT>
<BR><FONT SIZE=2>>> I'm testing GFS on RHEL3 but I have some problems.</FONT>
<BR><FONT SIZE=2>>> </FONT>
<BR><FONT SIZE=2>>> I have 2 servers connected to a shared SCSI storage, and one</FONT>
<BR><FONT SIZE=2>>> of the servers is the lock server (I don't have redundancy for the</FONT>
<BR><FONT SIZE=2>>> lock server at this moment; it is just for testing)</FONT>
<BR><FONT SIZE=2>>> </FONT>
<BR><FONT SIZE=2>>> Server1 (mount gfs filesystem + lock_server)</FONT>
<BR><FONT SIZE=2>>> Server2 (mount gfs filesystem)</FONT>
<BR><FONT SIZE=2>>> </FONT>
<BR><FONT SIZE=2>>> This is the test I made on server 1</FONT>
<BR><FONT SIZE=2>>> </FONT>
<BR><FONT SIZE=2>>> /etc/init.d/lock_gulmd stop</FONT>
</P>
<P><FONT SIZE=2>>You have a single lock server. This is where all of the lock state is</FONT>
<BR><FONT SIZE=2>>stored. The lock state is what keeps the different nodes mounting gfs</FONT>
<BR><FONT SIZE=2>>from corrupting data. You have no redundancy in the lock state. You</FONT>
<BR><FONT SIZE=2>>stopped the lock server. The lock state was lost. The cluster cannot</FONT>
<BR><FONT SIZE=2>>continue. The nodes killed themselves rather than let the filesystem</FONT>
<BR><FONT SIZE=2>>meta data get corrupted.</FONT>
<BR><FONT SIZE=2>></FONT>
<BR><FONT SIZE=2>>If you want to be able to stop lock servers, you MUST have redundancy in</FONT>
<BR><FONT SIZE=2>>the lock servers. For gulm this means you need three nodes.</FONT>
</P>
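<P><FONT SIZE=2>For reference, the gulm lock servers are declared in the cluster.ccs file of the CCS archive. A minimal sketch of a three-server layout follows; the cluster and node names here are placeholders, so adjust them to match your own nodes.cca entries:</FONT>
</P>
<PRE><FONT SIZE=2>cluster {
        name = "testcluster"
        lock_gulm {
                # With three entries, gulm can keep quorum and the lock
                # state survives stopping any single lock server.
                servers = ["server1", "server2", "server3"]
        }
}</FONT>
</PRE>
<P><FONT SIZE=2>With only one entry in servers, stopping lock_gulmd on that node discards all lock state, which is what forces the GFS nodes to halt.</FONT>
</P>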
<P><FONT SIZE=2>Thank you, but my problem is: the system gives a kernel panic and stops, so I then need to reboot it manually.</FONT>
</P>
</BODY>
</HTML>