[Linux-cluster] Re: gfs tuning

Derek Anderson Derek.Anderson at compellent.com
Mon Jun 16 17:08:58 UTC 2008


> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-
> bounces at redhat.com] On Behalf Of Terry
> Sent: Monday, June 16, 2008 11:54 AM
> To: linux clustering
> Subject: [Linux-cluster] Re: gfs tuning
> 
> Doh!   Check this out:
> 
> [root at omadvnfs01b ~]# gfs_tool df /data01d
> /data01d:
>   SB lock proto = "lock_dlm"
>   SB lock table = "omadvnfs01:gfs_data01d"
>   SB ondisk format = 1309
>   SB multihost format = 1401
>   Block size = 4096
>   Journals = 2
>   Resource Groups = 16384
>   Mounted lock proto = "lock_dlm"
>   Mounted lock table = "omadvnfs01:gfs_data01d"
>   Mounted host data = "jid=1:id=786434:first=0"
>   Journal number = 1
>   Lock module flags = 0
>   Local flocks = FALSE
>   Local caching = FALSE
>   Oopses OK = FALSE
> 
>   Type           Total          Used           Free           use%
>   ------------------------------------------------------------------------
>   inodes         18417216       18417216       0              100%
>   metadata       21078536       20002007       1076529        95%
>   data           1034059688     744936460      289123228      72%
> 
> 
> The number of inodes is interesting...

If you mean the percentage used: I believe GFS allocates inodes on
demand, so gfs_tool df will always report them as 100% in use.  That
number isn't a sign you've run out of anything.  (A quick sanity check
is sketched below; some notes on your scand_secs question follow the
quote.)
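
A rough way to see it for yourself (a sketch only; /data01d is taken
from your output, and the test file names are hypothetical):

  # The inode count grows as files are created; use% stays pinned at
  # 100% because GFS only carves out inodes when something needs them.
  gfs_tool df /data01d | grep inodes
  touch /data01d/inode-test-{1..100}   # throwaway test files
  gfs_tool df /data01d | grep inodes   # Total and Used both grow by ~100
  rm -f /data01d/inode-test-*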

> 
> 
> On Mon, Jun 16, 2008 at 11:45 AM, Terry <td3201 at gmail.com> wrote:
> > Hello,
> >
> > I have 4 GFS volumes, each 4 TB.  I am seeing pretty high load
> > averages on the host that is serving these volumes out via NFS.  I
> > notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> > CPU%.  I truly believe the box is I/O bound due to high await times,
> > but I'm still trying to dig into the root cause.  99% of the activity
> > on these volumes
> > is write.  The number of files is around 15 million per TB.   Given
> > the high number of writes, increasing scand_secs will not help.  Any
> > other optimizations I can do?
> >
> > Thanks!
> >
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
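
On the tuning side: the GFS1 knobs are per-mount and set with gfs_tool
settune (they reset to defaults on remount, so script them).  A rough
sketch of the usual suspects for a write-heavy volume; the values are
illustrative starting points, not recommendations, and glock_purge only
exists in newer GFS releases:

  # Dump current values for every tunable on the mount
  gfs_tool gettune /data01d

  # Scan glocks less often so gfs_scand burns less CPU
  # (default scand_secs is 5)
  gfs_tool settune /data01d scand_secs 30

  # Hold glocks longer before demoting them (default 300)
  gfs_tool settune /data01d demote_secs 600

  # Purge a percentage of unused glocks each scan to cap glock growth
  gfs_tool settune /data01d glock_purge 50

  # Cheaper, slightly stale statfs results for df and friends
  gfs_tool settune /data01d statfs_fast 1

Mounting with -o noatime,nodiratime also saves an inode update (and the
associated glock traffic) on every read, which adds up fast over NFS.
All from memory, so please test on one volume before rolling it out.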



