[Linux-cluster] Directories with >100K files

nick at javacat.f2s.com
Tue Jan 20 10:19:21 UTC 2009


Hi all,

cman-2.0.84-2.el5
gfs2-utils-0.1.44-1.el5_2.1
gfs-utils-0.1.17-1.el5
kmod-gfs-0.1.23-5.el5
kmod-gfs2-1.92-1.1.el5
kmod-gfs2-PAE-1.92-1.1.el5
kmod-gfs-PAE-0.1.23-5.el5
openais-0.80.3-15.el5
rgmanager-2.0.38-2.el5

Red Hat Enterprise Linux Server release 5.2 (Tikanga)
kernel 2.6.18-92.1.10.el5PAE

We are running the standard GFS1 that came with Red Hat 5.2.

We have a GFS filesystem on an iSCSI LUN. Running 'ls' in a directory
containing over 100,000 files takes close to nine minutes to return:

# time ls
real    8m58.704s
user    0m1.369s
sys     1m32.641s

# ls | wc -l
120359

We only have this issue in directories containing many thousands of files.
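
In case it helps narrow things down, I realise a plain 'ls' sorts everything,
and with the usual colour alias it also stats every entry. I was planning to
re-time it with sorting and per-file stat()s avoided, along these lines (just
a sketch, not yet run on this box):

# time ls -f > /dev/null
# time find . -maxdepth 1 -printf '.' > /dev/null

If those return quickly, presumably the time is going on per-inode stat/glock
traffic rather than on reading the directory itself.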

Can anyone recommend any GFS tunables to help us out here?
Should we set statfs_fast to 1?
What about glock_purge?
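
If either of those is worth trying, my understanding is that they are set at
runtime with gfs_tool settune, something along these lines (the values are
only guesses on my part, so please correct me):

# gfs_tool settune /apps statfs_fast 1
# gfs_tool settune /apps glock_purge 50
# gfs_tool settune /apps demote_secs 60

As I understand it statfs_fast only speeds up df/statfs, so glock_purge (a
percentage of unused glocks to drop) and a lower demote_secs may be more
relevant to the slow ls.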

Here is the fstab entry for the GFS filesystem:
/dev/vggfs/lvol00       /apps                   gfs     _netdev         1 2
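
I also wondered whether mounting with noatime/nodiratime would make any
difference, i.e. something like this instead (not tried yet):

/dev/vggfs/lvol00       /apps                   gfs     _netdev,noatime,nodiratime  1 2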

Here is gfs_tool df /apps
/apps:
  SB lock proto = "lock_dlm"
  SB lock table = "TEST:GFS1"
  SB ondisk format = 1309
  SB multihost format = 1401
  Block size = 4096
  Journals = 4
  Resource Groups = 3342
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "TEST:GFS1"
  Mounted host data = "jid=1:id=65537:first=0"
  Journal number = 1
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE
  Oopses OK = FALSE

  Type           Total          Used           Free           use%
  ------------------------------------------------------------------------
  inodes         5159283        5159283        0              100%
  metadata       5162977        165010         4997967        3%
  data           208673972      62025003       146648969      30%


Here is the output of gfs_tool gettune /apps | sort

atime_quantum = 3600
complain_secs = 10
demote_secs = 300
depend_secs = 60
entries_per_readdir = 32
glock_purge = 0
greedy_default = 100
greedy_max = 250
greedy_quantum = 25
ilimit1 = 100
ilimit1_min = 1
ilimit1_tries = 3
ilimit2 = 500
ilimit2_min = 3
ilimit2_tries = 10
incore_log_blocks = 1024
inoded_secs = 15
jindex_refresh_secs = 60
lockdump_size = 131072
logd_secs = 1
max_atomic_write = 4194304
max_mhc = 10000
max_readahead = 262144
new_files_directio = 0
new_files_jdata = 0
prefetch_secs = 10
quota_account = 1
quotad_secs = 5
quota_enforce = 1
quota_quantum = 60
quota_scale = 1.0000   (1, 1)
quota_simul_sync = 64
quota_warn_period = 10
reclaim_limit = 5000
recoverd_secs = 60
rgrp_try_threshold = 100
scand_secs = 5
stall_secs = 600
statfs_fast = 0
statfs_slots = 64
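
I can also post lock statistics if that would be useful; I was going to
collect them with something like:

# gfs_tool counters /apps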

Any help appreciated

Regards,
Nick.