[Linux-cluster] GFS RG size (and tuning)

Jos Vos jos at xos.nl
Sat Oct 27 12:57:26 UTC 2007


On Fri, Oct 26, 2007 at 07:57:18PM -0400, Wendy Cheng wrote:

> 1. 3TB is not "average size". Smaller RG can help with "df" command - 
> but if your system is congested, it won't help much.

The df also takes ages on an almost idle system.  In addition, the system
often needs to run rsyncs on large trees, and those take a very long time too.

In <http://sourceware.org/cluster/faq.html#gfs_tuning> it is suggested
that you should then make the RGs larger (i.e. fewer RGs).  As this requires
shuffling around TBs of data before recreating a GFS fs, I want to have
some idea of the chances that this will be useful.
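For reference, the RG size is fixed at mkfs time with the -r option (in MB,
default 256), so changing it means recreating the filesystem.  A sketch of
what that would look like -- the device, cluster name, fs name and journal
count below are all placeholders, not my actual setup:

```shell
# Recreate the GFS fs with 2048 MB resource groups instead of the
# default 256 MB, i.e. 8x fewer RGs to scan for df etc.
# (DESTROYS all data on the device -- back up / migrate first.)
gfs_mkfs -p lock_dlm -t mycluster:myfs -j 4 -r 2048 /dev/myvg/mylv
```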

> 2. The gfs_scand issue has more to do with the glock count. One way
> to tune this is via the purge_glock tunable. There is an old write-up in:
> http://people.redhat.com/wcheng/Patches/GFS/readme.gfs_glock_trimming.R4 
> . It is for RHEL4 but should work the same way for RHEL5.

I'll try.  I assume I can do this per node (so that I don't have to
bring the whole cluster down, but only stop the cluster services and
unmount the GFS volumes on each node in turn)?
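If I understand the write-up correctly, the per-node procedure would be
roughly the following (mount point and purge percentage are just examples
for illustration, not values anyone recommended):

```shell
# On one node at a time:
# 1. stop/relocate the cluster services using the fs, then unmount it
umount /mnt/gfs
# 2. install the patched kmod-gfs package and remount
mount /mnt/gfs
# 3. tell GFS to trim a percentage of unused glocks on each scan
gfs_tool settune /mnt/gfs glock_purge 50
# verify the tunable took effect
gfs_tool gettune /mnt/gfs | grep glock_purge
```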

Any chance this patch will make it into the standard RHEL package?
I want to avoid maintaining my own patched packages, although as long
as gfs.ko is in the separate kmod-gfs package that's doable.

-- 
--    Jos Vos <jos at xos.nl>
--    X/OS Experts in Open Systems BV   |   Phone: +31 20 6938364
--    Amsterdam, The Netherlands        |     Fax: +31 20 6948204
