[Linux-cluster] Cluster Project FAQ - GFS tuning section

Riaan van Niekerk riaan at obsidian.co.za
Tue Jan 23 15:44:56 UTC 2007



Jon Erickson wrote:
> On 1/23/07, David Teigland <teigland at redhat.com> wrote:
>> On Tue, Jan 23, 2007 at 08:39:32AM -0500, Wendell Dingus wrote:
>> > I don't know where that breaking point is but I believe _we've_ stepped
>> > over it.
>>
>> The number of files in the fs is a non-issue; the usage/access
>> patterns are almost always the issue.
>>
>> > 4-node RHEL3 and GFS6.0 cluster with (2) 2TB filesystems (GULM and no
>> > LVM) versus
>> > 3-node RHEL4 (x86_64) and GFS6.1 cluster with (1) 8TB+ filesystem (DLM
>> > and LVM and way faster hardware/disks)
>> >
>> > This is a migration from the former to the latter, so quantity/size of
>> > files/dirs is mostly identical. Files being transferred from customer
>> > sites to the old servers never cause more than about 20% CPU load and
>> > that usually (quickly) falls to 1% or less after the initial xfer
>> > begins. The new servers run to 100% where they usually remain until the
>> > transfer completes. The current thinking is that the cause is the
>> > same issue being discussed here.
>>
>> This is strange; are you mounting with noatime? Also, try setting
>> this on each node before it mounts gfs:
>>
>> echo "0" > /proc/cluster/lock_dlm/drop_count
> What does this do?
> 
> 

The /proc/cluster/lock_dlm/drop_count file tunes the number of locks
that lock_dlm keeps in its cache; writing "0" removes the limit
(unlimited). For more info, see:

"What does the /proc/cluster/lock_dlm/drop_count file do and why do some 
nodes exceed this value?"
http://kbase.redhat.com/faq/FAQ_85_9320.shtm
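
As a minimal sketch, the two suggestions above (noatime and drop_count)
could be combined on each node before the GFS mount. The device, volume
group, and mount point names below are hypothetical, and running this
from a pre-mount init hook is an assumption on my part, not something
from this thread:

    #!/bin/sh
    # Run on each node *before* the GFS filesystem is mounted,
    # e.g. from an init script ordered ahead of the gfs mount.

    # Disable the lock_dlm cached-lock limit (0 = unlimited).
    echo "0" > /proc/cluster/lock_dlm/drop_count

    # Mount with noatime so plain reads do not trigger inode
    # (atime) updates and the associated lock traffic.
    # Device and mount point are examples only.
    mount -t gfs -o noatime /dev/vg_data/lv_gfs /mnt/gfs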

Riaan

