[Linux-cluster] Re: gfs tuning
Tiago Cruz
tiagocruz at forumgdh.net
Mon Jun 16 17:00:43 UTC 2008
Doh!! :)
Is this normal?
$ gfs_tool df /mnt
/mnt:
SB lock proto = "lock_dlm"
SB lock table = "hotsite:gfs-00"
SB ondisk format = 1309
SB multihost format = 1401
Block size = 4096
Journals = 2
Resource Groups = 424
Mounted lock proto = "lock_dlm"
Mounted lock table = "hotsite:gfs-00"
Mounted host data = "jid=1:id=196609:first=0"
Journal number = 1
Lock module flags = 0
Local flocks = FALSE
Local caching = FALSE
Oopses OK = FALSE
Type Total Used Free use%
------------------------------------------------------------------------
inodes 854 854 0 100%
metadata 48761 2259 46502 5%
data 27652913 1061834 26591079 4%
I think my load average is very high under ApacheBench (ab)...
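
For anyone reproducing this: the daemon CPU and the awaits can be watched
with standard tools. A quick sketch, with the sampling interval as a
placeholder value:

  # CPU use of the GFS/DLM kernel daemons named in this thread
  $ top -b -n 1 | egrep 'gfs_scand|dlm_scand|dlm_recv'
  # per-device await and %util, sampled every 5 seconds (sysstat package)
  $ iostat -x 5
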
On Mon, 2008-06-16 at 11:53 -0500, Terry wrote:
> Doh! Check this out:
>
> [root at omadvnfs01b ~]# gfs_tool df /data01d
> /data01d:
> SB lock proto = "lock_dlm"
> SB lock table = "omadvnfs01:gfs_data01d"
> SB ondisk format = 1309
> SB multihost format = 1401
> Block size = 4096
> Journals = 2
> Resource Groups = 16384
> Mounted lock proto = "lock_dlm"
> Mounted lock table = "omadvnfs01:gfs_data01d"
> Mounted host data = "jid=1:id=786434:first=0"
> Journal number = 1
> Lock module flags = 0
> Local flocks = FALSE
> Local caching = FALSE
> Oopses OK = FALSE
>
> Type Total Used Free use%
> ------------------------------------------------------------------------
> inodes 18417216 18417216 0 100%
> metadata 21078536 20002007 1076529 95%
> data 1034059688 744936460 289123228 72%
>
>
> The number of inodes is interesting...
>
>
> On Mon, Jun 16, 2008 at 11:45 AM, Terry <td3201 at gmail.com> wrote:
> > Hello,
> >
> > I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
> > averages on the host that is serving these volumes out via NFS. I
> > notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> > CPU%. I believe the box is I/O bound due to high awaits, but I'm
> > trying to dig into the root cause. 99% of the activity on these volumes
> > is write. The number of files is around 15 million per TB. Given
> > the high number of writes, increasing scand_secs will not help. Any
> > other optimizations I can do?
> >
> > Thanks!
> >
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
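
Re: scand_secs above -- it is a per-mount tunable, so it can at least be
inspected and adjusted at runtime. A rough sketch, assuming a GFS mount at
/mnt (the 30s value is only illustrative; defaults vary by GFS release):

  # list the current tunables for this mount
  $ gfs_tool gettune /mnt
  # raise the gfs_scand wakeup interval from its 5s default
  $ gfs_tool settune /mnt scand_secs 30

Note that settune values do not survive a remount, so they normally go in
an init or mount script.
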
--
Tiago Cruz
http://everlinux.com
Linux User #282636