[Linux-cluster] RHEL4U5 GFS: 257,000 small file creation and removal

Robert Hurst rhurst at bidmc.harvard.edu
Tue Dec 9 13:57:03 UTC 2008


A runaway application print job created this large number of nearly
identical small files yesterday, which in turn caused mucho problems for
us when trying to do a directory listing (ls) and removal (rm).
Eventually, we had to fence the node that had incurred a system load of
over 800(!), and upon reboot, I removed the files using something more
GFS-friendly, i.e.:

    find spool -name 'PRT_4*' -exec rm -fv {} \;
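
For future cleanups of this size, something along these lines should be
gentler still -- a sketch only, assuming GNU findutils on these nodes;
it avoids forking one rm process per file:

    find spool -name 'PRT_4*' -print0 | xargs -0 -n 500 rm -f

(Newer GNU find also has a -delete action that would skip the pipe
entirely, but I have not checked whether the find shipped with RHEL4
supports it.)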

Temporary print files are now being created and removed cleanly and
efficiently, as before.

But while that solved the cleanup, and even after umounting / remounting
the other two nodes' GFS spool directory, we are still experiencing
latency when doing a simple ls in that directory -- not nearly as bad as
before, but nonetheless a few seconds can go by just to print < 100
entries.  It is naturally concerning to us, because no other directory
in that same GFS filesystem (or any other) is giving us any such latency
issues.
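
A quick check that may separate the directory-read cost from the
per-entry lock cost (plain coreutils, nothing GFS-specific; ls -f skips
sorting and, barring color aliases, should not need to stat each entry):

    time ls -f /mycroft/spool/print | wc -l
    time ls -l /mycroft/spool/print > /dev/null

If the first is fast and only the second crawls, the time is presumably
going into per-inode glocks rather than into walking the oversized
directory itself.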

The spool directory entry itself grew from its usual 2 KB to 64 KB to
accommodate all those prior filenames ... is re-mkdir'ing the directory
required in order to avoid this GFS latency?
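
If re-creating it does turn out to be the answer, this is roughly what I
have in mind -- a sketch only, assuming printing to this spool can be
paused briefly and that the 2777 root:500 ownership shown in the stat
output below is what should come back:

    mkdir /mycroft/spool/print.new
    chown root:500 /mycroft/spool/print.new
    chmod 2777 /mycroft/spool/print.new
    mv /mycroft/spool/print/* /mycroft/spool/print.new/
    mv /mycroft/spool/print /mycroft/spool/print.old
    mv /mycroft/spool/print.new /mycroft/spool/print

(The bloated directory would stay around as print.old until we are
satisfied, then be removed; any dot-files would have to be moved by hand
first.)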

Its /proc/mounts entry and GFS statistics follow:


/dev/mapper/VGCOMMON-lvolmycroft /mycroft gfs rw,noatime,nodiratime 0 0

root@db4 [~]$ gfs_tool counters /mycroft

                                  locks 30633
                             locks held 16922
                           freeze count 0
                          incore inodes 14838
                       metadata buffers 1433
                        unlinked inodes 26
                              quota IDs 4
                     incore log buffers 35
                         log space used 2.59%
              meta header cache entries 94
                     glock dependencies 9
                 glocks on reclaim list 0
                              log wraps 22
                   outstanding LM calls 0
                  outstanding BIO calls 0
                       fh2dentry misses 0
                       glocks reclaimed 15220147
                         glock nq calls 367966145
                         glock dq calls 367542403
                   glock prefetch calls 8630117
                          lm_lock calls 9397143
                        lm_unlock calls 8889487
                           lm callbacks 18518378
                     address operations 313689508
                      dentry operations 6984706
                      export operations 0
                        file operations 174055404
                       inode operations 11849074
                       super operations 187059465
                          vm operations 92
                        block I/O reads 6013763
                       block I/O writes 1219630

root@db4 [~]$ gfs_tool stat /mycroft
  mh_magic = 0x01161970
  mh_type = 4
  mh_generation = 39817
  mh_format = 400
  mh_incarn = 0
  no_formal_ino = 26
  no_addr = 26
  di_mode = 0777
  di_uid = 0
  di_gid = 500
  di_nlink = 7
  di_size = 3864
  di_blocks = 1
  di_atime = 1202021809
  di_mtime = 1228830595
  di_ctime = 1228830595
  di_major = 0
  di_minor = 0
  di_rgrp = 0
  di_goal_rgrp = 0
  di_goal_dblk = 0
  di_goal_mblk = 0
  di_flags = 0x00000001
  di_payload_format = 1200
  di_type = 2
  di_height = 0
  di_incarn = 0
  di_pad = 0
  di_depth = 0
  di_entries = 7
  no_formal_ino = 0
  no_addr = 0
  di_eattr = 0
  di_reserved =
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 

Flags:
  jdata

root@db4 [~]$ gfs_tool stat /mycroft/spool/print
  mh_magic = 0x01161970
  mh_type = 4
  mh_generation = 5775277
  mh_format = 400
  mh_incarn = 0
  no_formal_ino = 1180030
  no_addr = 1180030
  di_mode = 02777
  di_uid = 0
  di_gid = 500
  di_nlink = 4
  di_size = 65536
  di_blocks = 4219
  di_atime = 1202022045
  di_mtime = 1228830803
  di_ctime = 1228830803
  di_major = 0
  di_minor = 0
  di_rgrp = 1179676
  di_goal_rgrp = 4521984
  di_goal_dblk = 0
  di_goal_mblk = 130
  di_flags = 0x00000003
  di_payload_format = 0
  di_type = 2
  di_height = 1
  di_incarn = 0
  di_pad = 0
  di_depth = 13
  di_entries = 99
  no_formal_ino = 0
  no_addr = 0
  di_eattr = 0
  di_reserved =
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
00 00 00 00 00 00 00 00 

Flags:
  jdata
  exhash




