[Linux-cluster] GFS - Performance with small files

Andrew A. Neuschwander andrew at ntsg.umt.edu
Mon Nov 30 17:34:17 UTC 2009


Leonardo D'Angelo Gonçalves wrote:
> Hi
> 
> I have a GFS cluster on RHEL 4.8 with one filesystem (10G) containing various
> directories and sub-directories of small files, each about 5 KB. When I run
> "du -sh" on the directory it generates about 1500 IOPS on the disks; it takes
> about 5 minutes on GFS but only 2 seconds on an ext3 filesystem. Could
> someone help me with this problem? The output of gfs_tool follows below.
> Why does GFS take 5 minutes when ext3 takes 2 seconds? Is there any relation?
> ilimit1 = 100
> ilimit1_tries = 3
> ilimit1_min = 1
> ilimit2 = 500
> ilimit2_tries = 10
> ilimit2_min = 3
> demote_secs = 300
> incore_log_blocks = 1024
> jindex_refresh_secs = 60
> depend_secs = 60
> scand_secs = 5
> recoverd_secs = 60
> logd_secs = 1
> quotad_secs = 5
> inoded_secs = 15
> glock_purge = 0
> quota_simul_sync = 64
> quota_warn_period = 10
> atime_quantum = 3600
> quota_quantum = 60
> quota_scale = 1.0000   (1, 1)
> quota_enforce = 1
> quota_account = 1
> new_files_jdata = 0
> new_files_directio = 0
> max_atomic_write = 4194304
> max_readahead = 262144
> lockdump_size = 131072
> stall_secs = 600
> complain_secs = 10
> reclaim_limit = 5000
> entries_per_readdir = 32
> prefetch_secs = 10
> statfs_slots = 64
> max_mhc = 10000
> greedy_default = 100
> greedy_quantum = 25
> greedy_max = 250
> rgrp_try_threshold = 100
> statfs_fast = 0
> seq_readahead = 0
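
For reference, tunables like these can be inspected and changed at runtime with gfs_tool. The commands below are a sketch only — the mount point is hypothetical and the values are illustrative, not recommendations; check the Red Hat GFS tuning documentation for your release before changing anything:

```shell
# Show the current tunables for a GFS mount (this is where the list
# above comes from); /mnt/gfs is a placeholder mount point.
gfs_tool gettune /mnt/gfs

# Illustrative changes sometimes suggested for stat-heavy, small-file
# workloads: trim a percentage of unused glocks (glock_purge is 0,
# i.e. disabled, in the output above) and demote idle locks sooner.
gfs_tool settune /mnt/gfs glock_purge 50
gfs_tool settune /mnt/gfs demote_secs 200
```

Note that settune changes do not persist across a remount, so anything that helps would need to be reapplied (e.g. from an init script).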
> 
> 
> 
> ------------------------------------------------------------------------

Leonardo

I'm not sure if 4.8 supports it, but in 5.4 the plock_rate_limit option in cluster.conf has a 
terrible default which can cause this type of slowdown. I have these statements in the <cluster> 
block of my cluster.conf:

<dlm plock_rate_limit="0"/>
<gfs_controld plock_rate_limit="0" plock_ownership="0"/>
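
In context, that looks roughly like the sketch below — the cluster name and config_version are hypothetical placeholders, and the rest of a real cluster.conf (clusternodes, fencing, etc.) is elided:

```xml
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <!-- disable posix-lock rate limiting (0 = unlimited) -->
  <dlm plock_rate_limit="0"/>
  <gfs_controld plock_rate_limit="0" plock_ownership="0"/>
  <!-- clusternodes, fencedevices, etc. go here as usual -->
</cluster>
```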

Searching the web, there are a lot of recommendations to set plock_ownership to 1, but its benefit 
is more workload-dependent than plock_rate_limit's.

-Andrew
