[Linux-cluster] GFS 6 hang during file copy..

Kovacs, Corey J. cjk at techma.com
Wed Nov 24 19:33:04 UTC 2004


I've got a 3-node cluster of DL380s running RHAS 3 Update 3 with RHGFS 6.0.0-15.
The lock servers run on the cluster nodes themselves rather than on external
machines. I set up NFS to export one of the GFS mount points, and that was
working fine.
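For context, the export is nothing exotic; it's roughly the following (the
path and client subnet here are placeholders for my real ones):

    # /etc/exports (illustrative entry, not the exact one)
    /gfs1   192.168.1.0/24(rw,sync,no_root_squash)

    # re-export and confirm
    exportfs -ra
    exportfs -v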

I then tried to populate the GFS mount point with about 24 GB of files from
another system, using scp, to do some testing. During that copy, GFS froze.
That is to say, I can still interact with the nodes unless I do anything that
touches the GFS mount (e.g. ls -l /gfs1), which hangs the machine.
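The copy itself was nothing unusual, something along these lines (the host
and source path are placeholders, not the real ones):

    # run on the cluster node, pulling into the GFS mount
    scp -rp user@othersystem:/data/* /gfs1/

It was partway through this copy that the hang showed up.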

After a reboot, things were fine. I did a gfs_tool df /gfs1 and it showed
~79000 inodes in use and right around 20 GB (just under, actually) used. The
filesystem itself is 300+ GB.

My question, if it's the right one, is: did I hit a limit on inodes? As I
recall, GFS maps inodes from the host somehow. Is there a way to increase the
number so that I can actually use the space I have, or must I reduce the size
of the filesystem?

Of course I could be completely off base, and if so, please let me know...

Thanks


Corey