[Linux-cluster] GFS2 poor performance

Steven Whitehouse swhiteho at redhat.com
Tue Nov 11 09:23:43 UTC 2008


Hi,

On Mon, 2008-11-10 at 16:03 -0200, Fabiano F. Vitale wrote:
>  Setting demote_secs to 30 and glock_purge to 70 on a gfs filesystem 
> dramatically increased the performance of commands like ls and df in a 
> directory that has many files.
> But the gfs2 filesystem doesn't have the glock_purge attribute to tune.
> Is there any attribute in gfs2 in place of glock_purge, which exists 
> only in gfs1?
> 
> thanks
> 
> Fabiano
> 
> 
That is entirely deliberate. GFS2 is self-tuning as far as glocks go,
so such settings are not needed. The demote time setting for glocks in
GFS2 only applies to non-inode glocks, and it may well go away in the
future once we have an automatic way to deal with those too.

One of the goals of GFS2 is to reduce the need for users to change
obscure settings in order to get the best performance in any
particular situation.

Steve.

> ----- Original Message ----- 
> From: "Jeff Sturm" <jeff.sturm at eprize.com>
> To: "linux clustering" <linux-cluster at redhat.com>
> Sent: Thursday, November 06, 2008 5:53 PM
> Subject: RE: [Linux-cluster] GFS2 poor performance
> 
> 
> >I looked over the summit document you referenced below.  The value of 
> >demote_secs mentioned there is an example setting, and unfortunately no 
> >recommendations or rationale accompany it.
> >
> > For some access patterns you can get better performance by actually 
> > increasing demote_secs.  For example, we have a node onto which we 
> > routinely rsync a file tree on a GFS partition.  Increasing demote_secs 
> > from 300 to 86400 reduced the average rsync time by a factor of about 4.  
> > The reason is that this node has little lock contention and needs to lock 
> > each file every time we start an rsync process.  With demote_secs=300, it 
> > was doing much more work to reacquire locks on each run, whereas 
> > demote_secs=86400 allowed the locks to persist for up to a day; the 
> > overall number of files in our application is bounded such that they fit 
> > in the buffer cache, together with their locks.
> >
> > At another extreme, we have an application that creates a lot of files but 
> > seldom opens them on the same node.  In this case there is no value in 
> > holding onto the locks, so we set demote_secs to a small value and 
> > glock_purge as high as 70 to ensure locks are quickly released in memory.
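> > As a sketch, those two workload profiles might be tuned like this (the 
> > mount points and exact values below are illustrative, not 
> > recommendations):

```shell
# Node that rsyncs the same bounded file tree and sees little lock
# contention: let glocks persist so repeated runs need not reacquire them.
gfs_tool settune /gfs/rsync-target demote_secs 86400

# Node that creates many files but seldom reopens them: demote quickly
# and purge aggressively so unused locks don't accumulate in memory.
gfs_tool settune /gfs/spool demote_secs 60
gfs_tool settune /gfs/spool glock_purge 70
```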
> >
> > The best advice I can give in general is to experiment with different 
> > settings for demote_secs and glock_purge while watching the output of 
> > "gfs_tool counters" to see how they behave.
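> > For example, one way to structure that experiment (the mount point, 
> > the candidate values, and the du workload are all placeholders; 
> > substitute your own):

```shell
#!/bin/sh
# Try several demote_secs values and, for each, compare workload time
# and lock churn as reported by "gfs_tool counters".
MNT=/gfs/test    # placeholder mount point
for secs in 60 300 3600 86400; do
    gfs_tool settune "$MNT" demote_secs "$secs"
    gfs_tool counters "$MNT" > /tmp/counters.before
    time du -sh "$MNT"                       # stand-in for the real workload
    gfs_tool counters "$MNT" > /tmp/counters.after
    echo "=== demote_secs=$secs ==="
    diff /tmp/counters.before /tmp/counters.after  # how the counters moved
done
```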
> >
> > Jeff
> >
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com 
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Fabiano F. Vitale
> > Sent: Tuesday, November 04, 2008 3:19 PM
> > To: linux clustering
> > Subject: Re: [Linux-cluster] GFS2 poor performance
> >
> > Hi,
> >
> > for cluster purposes the two nodes are linked by a Cat6 patch cord and 
> > the LAN interfaces are gigabit.
> >
> > All nodes have an Emulex Zephyr-X LightPulse Fibre Channel adapter, and 
> > the storage is an HP EVA8100.
> >
> > I read the document
> > http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Summit08presentation_GFSBestPractices_Final.pdf
> > which shows some parameters to tune; one of them is demote_secs, 
> > adjusted to 100 seconds.
> >
> > thanks
> >
> >> What sort of network and storage device are you using?
> >>
> >> Also, why set demote_secs so low?
> >>
> >> -----Original Message-----
> >> From: linux-cluster-bounces at redhat.com
> >> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of ffv at tjpr.jus.br
> >> Sent: Tuesday, November 04, 2008 2:13 PM
> >> To: linux-cluster at redhat.com
> >> Subject: [Linux-cluster] GFS2 poor performance
> >>
> >> Hi all,
> >>
> >> I'm getting very poor performance using GFS2.
> >> I have two qmail (mail) servers and one gfs2 filesystem shared by them.
> >> Each directory in the GFS2 filesystem may have up to 10000 files
> >> (mails).
> >>
> >> The problem is the performance of some operations like ls, du, rm, etc.
> >> For example,
> >>
> >> # time du -sh /dados/teste
> >> 40M     /dados/teste
> >>
> >> real    7m14.919s
> >> user    0m0.008s
> >> sys     0m0.129s
> >>
> >> This is unacceptable.
> >>
> >> Some attributes I have already set using gfs2_tool:
> >>
> >> gfs2_tool settune /dados demote_secs 100
> >> gfs2_tool setflag jdata /dados
> >> gfs2_tool setflag sync /dados
> >> gfs2_tool setflag directio /dados
> >>
> >> but the performance is still very bad
> >>
> >>
> >> Does anybody know how to tune the filesystem for acceptable performance
> >> when working with directories of 10000 files?
> >> thanks for any help
> >>
> >> --
> >> Linux-cluster mailing list
> >> Linux-cluster at redhat.com
> >> https://www.redhat.com/mailman/listinfo/linux-cluster
> >>
> >>
> >>
> >
> 



