[Linux-cluster] GFS: drop_count and drop_period tuning

Claudio Tassini claudio.tassini at gmail.com
Mon Sep 10 11:19:25 UTC 2007


Hi all,
I have a four-node GFS cluster on RHEL 4.5 (latest versions, updated
yesterday). There are three GFS filesystems (1 TB, 450 GB and 5 GB), serving
some mail domains with Postfix/Courier IMAP in a "maildir" configuration.

As you might suspect, this is not exactly ideal for GFS: we have a lot
(thousands) of very small files (emails) spread across a great many
directories. I'm trying to tune things to get the best performance. I found
that setting the drop_count parameter in /proc/cluster/lock_dlm/drop_count
to a very large value (it was 500000 and now, after a memory upgrade, I've
set it to 1500000) uses a lot of memory (about 10 GB of the 16 GB installed
in each machine) but seems to boost performance by limiting iowait CPU
usage.
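For reference, this is how I change the value at runtime (a sketch; the /proc path below is the RHEL 4 lock_dlm interface and only exists on a cluster node running lock_dlm, and 1500000 is just the value from my setup):

```shell
#!/bin/sh
# drop_count tells lock_dlm how many cached locks a node may hold before
# it starts asking GFS to drop them; larger values trade memory for fewer
# lock round-trips. Path below assumes a RHEL 4 node running lock_dlm.
PROC=/proc/cluster/lock_dlm/drop_count

if [ -w "$PROC" ]; then
    echo 1500000 > "$PROC"      # value used on my memory-upgraded nodes
    echo "drop_count is now $(cat "$PROC")"
else
    # Fall back gracefully on machines without the lock_dlm proc interface.
    echo "lock_dlm proc interface not present on this host" >&2
fi
```

Since /proc settings do not survive a reboot, the echo has to be repeated from an init script on each boot.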

The bad thing is that when I unmount a filesystem, it has to clean up all
those locks (I think), and sometimes this causes problems for the whole
cluster: the other nodes stall their writes to the filesystem while I'm
unmounting on one node only.
Is this normal? How can I tune this so memory is released faster when I
unmount the FS? I've read about running more gfs_glockd daemons per
filesystem with the num_glockd mount option, but that seems to be
deprecated because it supposedly shouldn't be necessary.
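For completeness, this is how the num_glockd option would be passed at mount time. The device and mount point below are hypothetical, and the command is only printed rather than executed, since it needs a real GFS device:

```shell
#!/bin/sh
# num_glockd asks GFS to run extra glock daemons for the filesystem, which
# are meant to reclaim unused locks faster (the default is one daemon).
# DEVICE and MOUNTPOINT are made-up names for illustration.
DEVICE=/dev/vg_mail/lv_maildir
MOUNTPOINT=/var/spool/maildir
GLOCKD=4

# Print the command instead of running it:
echo "mount -t gfs -o num_glockd=$GLOCKD $DEVICE $MOUNTPOINT"
```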



-- 
Claudio Tassini

