[Linux-cluster] GFS 6.0.2-24 + NFS (ALSO)

Corey Kovacs cjkovacs at verizon.net
Sun Feb 27 16:10:55 UTC 2005


Also, on this same cluster, when using the "gulm stonith" fencing module in 
clumanager, I get errors generated by ...

log_err("Protocol Mismatch: We're %#x and They're %#x\n",
        GIO_WIREPROT_VERS, x_proto);

which, judging by the surrounding code, seems to indicate that the fence 
device login is failing. I am trying to fence using fence_ilo against 
DL360's with iLO firmware 1.64. My config looks something like this...

fence_devices {
	iLO_1 {
		agent="fence_ilo"
		hostname="1.2.3.4"
		login="admin_user"
		passwd="somepassword"
		action="off"
	}
}


and in the nodes.ccs file I reference the fence device like this...


nodes {
	somenode {
		ip_interfaces {
			eth0="2.3.4.5"
		}
		fence {
			iLO {
				iLO_1 {}
			}
		}
	}
}

I pass no options since the only option I use is "off" and it is defined in 
the fence.ccs file.
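
One way I've been trying to rule out the login itself is to drive the agent 
by hand. This is just a sketch, assuming the stock GFS 6.0 fence_ilo reads 
the same key=value pairs on stdin that fenced would pass it (the values 
below are the placeholders from my fence.ccs above):

# Feed fence_ilo the same parameters CCS would hand it.
# 1.2.3.4 / admin_user / somepassword are placeholders, not real creds.
/sbin/fence_ilo <<EOF
agent=fence_ilo
hostname=1.2.3.4
login=admin_user
passwd=somepassword
action=off
EOF

If that fails in the same way, the problem would seem to be between the 
agent and the iLO itself rather than in clumanager.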


Any ideas as to what might be causing this?


Corey




On Saturday 26 February 2005 11:57, Corey Kovacs wrote:
> I have a 5 node cluster running GFS 6.0.2-24 with kernel 2.4.21-27.0.1 on
> RHASu4. I have three GFS filesystems (20GB, 40GB and ~1.8TB) mounted from
> an MSA1000 SAN. The large partition is being re-exported via NFS. When
> copying a large file (~450GB) to the NFS re-exported GFS filesystem, the
> filesystem hangs across all nodes. When the offending node is shut down
> (it never gets fenced, and I am using fence_ilo) the system "wakes up".
> The nodes are DL360's with 2GB of RAM each. They are using QLogic 2340
> fibre cards and Red Hat branded drivers. Three of the 5 nodes are
> configured as lock managers. I've seen messages about lock_gulm not
> freeing memory. Are there issues with NFS and GFS together? What things
> should be done to tune such a configuration? Any help would be greatly
> appreciated.
>
>
> Corey
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> http://www.redhat.com/mailman/listinfo/linux-cluster



