[Linux-cluster] GFS: drop_count and drop_period tuning

Hagmann, Michael Michael.Hagmann at hilti.com
Mon Sep 10 21:17:56 UTC 2007

If you are on RHEL 4.5, I highly suggest you use the new
glock_purge parameter for every GFS filesystem. Add to /etc/rc.local:
gfs_tool settune / glock_purge 50
gfs_tool settune /scratch glock_purge 50
Note that this parameter has to be set again after every mount. That means
if you umount the filesystem and then mount it again, you must run
/etc/rc.local again, otherwise the setting is gone!
You might also check out this page -->


From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Claudio Tassini
Sent: Montag, 10. September 2007 13:19
To: linux clustering
Subject: [Linux-cluster] GFS: drop_count and drop_period tuning

Hi all, 

I have a four-node GFS cluster on RHEL 4.5 (latest versions, updated
yesterday). There are three GFS filesystems (1 TB, 450 GB and 5 GB),
serving some mail domains with Postfix/Courier IMAP in a "maildir" layout.

As you might suspect, this is not exactly the best workload for GFS: we
have a lot (thousands) of very small files (emails) spread across a very
large number of directories.
I'm trying to tune things to get the best performance. I found that
raising the drop_count parameter in /proc/cluster/lock_dlm/drop_count
to a very large value (it was 500000 and now, after a memory upgrade,
I've set it to 1500000) uses a lot of memory (about 10 GB of the 16 GB
installed in each machine) and seems to boost performance by limiting
iowait CPU usage.

The bad thing is that when I umount a filesystem, it must clean up all
those locks (I think), and sometimes this causes problems for the whole
cluster: the other nodes stall writes to the filesystem while I'm
umounting on one node only.
Is this normal? How can I tune this to free memory faster when I umount
the FS? I've read something about running more gfs_glockd daemons per
filesystem with the num_glockd mount option, but that seems to be rather
deprecated because it shouldn't be necessary..
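For what it's worth, num_glockd is passed at mount time, so a sketch of trying it would look like the following (the device path and mount point here are hypothetical examples, not from this thread):

```shell
# Mount a GFS filesystem with extra glock daemons per filesystem.
# num_glockd defaults to 1; device path and mount point are examples only.
mount -t gfs -o num_glockd=4 /dev/vg0/lv_mail /mail
```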

Claudio Tassini 
