[Linux-cluster] Re: gfs tuning

Terry td3201 at gmail.com
Mon Jun 16 16:53:59 UTC 2008


Doh!   Check this out:

[root@omadvnfs01b ~]# gfs_tool df /data01d
/data01d:
  SB lock proto = "lock_dlm"
  SB lock table = "omadvnfs01:gfs_data01d"
  SB ondisk format = 1309
  SB multihost format = 1401
  Block size = 4096
  Journals = 2
  Resource Groups = 16384
  Mounted lock proto = "lock_dlm"
  Mounted lock table = "omadvnfs01:gfs_data01d"
  Mounted host data = "jid=1:id=786434:first=0"
  Journal number = 1
  Lock module flags = 0
  Local flocks = FALSE
  Local caching = FALSE
  Oopses OK = FALSE

  Type           Total          Used           Free           use%
  ------------------------------------------------------------------------
  inodes         18417216       18417216       0              100%
  metadata       21078536       20002007       1076529        95%
  data           1034059688     744936460      289123228      72%


The number of inodes is interesting: they show 100% use, with metadata at 95%.
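
A quick sketch for spot-checking the same figures on all four volumes (only /data01d appears above; the other mount point names are guesses, adjust to match your setup):

for fs in /data01a /data01b /data01c /data01d; do
    echo "== $fs =="
    # pull just the inodes and metadata rows out of gfs_tool df
    gfs_tool df $fs | egrep 'inodes|metadata'
done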


On Mon, Jun 16, 2008 at 11:45 AM, Terry <td3201 at gmail.com> wrote:
> Hello,
>
> I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
> averages on the host that is serving these volumes out via NFS.  I
> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> CPU%.  I truly believe the box is I/O bound due to high awaits, but I am
> still trying to dig into the root cause.  99% of the activity on these
> volumes is writes.  The number of files is around 15 million per TB.  Given
> the high number of writes, increasing scand_secs will not help.  Are there
> any other optimizations I can make?
>
> Thanks!
>
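
For reference, the scand/demote tunables mentioned above can be read and set per mount point with gfs_tool. A rough sketch only: the values below are illustrative rather than recommendations, settings do not persist across a remount, and which tunables exist depends on the GFS version:

# list the current scand/demote settings for one mount point
gfs_tool gettune /data01d | egrep 'scand_secs|demote_secs'
# example values only, not tuned recommendations
gfs_tool settune /data01d scand_secs 30
gfs_tool settune /data01d demote_secs 600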



