[Linux-cluster] How does caching work in GFS1?

Peter Schobel pschobel at 1iopen.net
Tue Aug 24 18:24:02 UTC 2010


Hi Steven,

I am not sure whether the software uses inotify or dnotify; I am trying
to find out. The editor in question is SlickEdit. We have found that we
can make some changes in the user's tag file to optimize tagging by
excluding unnecessary directories.

Do you happen to know why I seem to get a cache savings when running du
on one directory but not on another, as shown below?

[testuser at buildmgmt-000 testdir]$ for ((i=0;i<=3;i++)); do time du >/dev/null; done

real    2m10.133s
user    0m0.193s
sys     0m14.579s

real    0m1.948s
user    0m0.043s
sys     0m1.048s

real    0m0.277s
user    0m0.034s
sys     0m0.240s

real    0m0.274s
user    0m0.033s
sys     0m0.239s

[testuser at buildmgmt-000 main0]$ for ((i=0;i<=3;i++)); do time du >/dev/null; done

real    5m41.908s
user    0m0.596s
sys     0m36.141s

real    3m45.757s
user    0m0.574s
sys     0m43.868s

real    3m17.756s
user    0m0.484s
sys     0m44.666s

real    3m15.267s
user    0m0.535s
sys     0m45.981s

In the first example the directory size is 2G and contains 64423
files. In the second example the directory size is 30G and contains
164812 files.
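
For scale, here is a rough back-of-envelope on how much kernel cache each
tree needs to stay warm. The ~1 KiB-per-file figure is an assumption for
combined inode/dentry (and, on GFS1, glock) state, not a measured number:

```python
# Hypothetical estimate of kernel cache footprint for the two trees.
# ASSUMPTION: roughly 1 KiB per file of inode + dentry (+ glock) state.
PER_FILE_BYTES = 1024

trees = [("small (2G)", 64423), ("large (30G)", 164812)]

for name, n in trees:
    mib = n * PER_FILE_BYTES / 2**20
    print(f"{name}: ~{mib:.0f} MiB of cache needed to stay warm")
```

If the per-file cost is in that ballpark, the larger tree needs roughly
2.5x the cache of the smaller one, which could push it past whatever
limit is evicting entries between runs.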

On the larger directory there appears to be some initial saving after
the first run, but subsequent runs of du still take roughly the same
amount of time as each other. On the smaller directory the speedup on
subsequent runs is dramatic. I assume I am running into some caching
limit, but I am unsure what that limit is or whether it is possible to
increase it somehow.
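
The cold/warm pattern itself is easy to reproduce on a scratch tree on
any filesystem; what is GFS1-specific is the glocks going away again on
the big tree. A minimal sketch (the mount point in the last comment is a
placeholder):

```shell
# Build a small scratch tree and time du cold vs. warm.
mkdir -p /tmp/du-cache-test
for i in $(seq 1 500); do : > /tmp/du-cache-test/f$i; done

time du -s /tmp/du-cache-test >/dev/null   # cold: stats every file on disk
time du -s /tmp/du-cache-test >/dev/null   # warm: served from dentry/inode cache

# On GFS1, the per-mount tunables (demote_secs among them) can be listed with:
#   gfs_tool gettune <mountpoint>
```

If the warm run on GFS1 is still slow, that points at glocks being
demoted (or never held in sufficient numbers) rather than at the generic
page/dentry cache.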

Any help in understanding how this works would be greatly appreciated.

Thanks,

Peter
~

On Mon, Aug 23, 2010 at 3:12 AM, Steven Whitehouse <swhiteho at redhat.com> wrote:
> Hi,
>
> On Fri, 2010-08-20 at 09:34 -0700, Peter Schobel wrote:
>> Update on the use case: The main headache for our developers is the
>> intellisense feature in the graphical ide which suffers from the
>> performance problem.
>>
>> Peter
>
> Just a hunch, but does that use i/dnotify or something along those lines?
> That is common in GUIs and not supported on gfs/gfs2, although the
> chances are that it will work for local fs modifications only,
>
> Steve.
>
>> ~
>>
>> On Wed, Aug 11, 2010 at 2:35 PM, Jeff Sturm <jeff.sturm at eprize.com> wrote:
>> >> -----Original Message-----
>> >> From: linux-cluster-bounces at redhat.com
>> > [mailto:linux-cluster-bounces at redhat.com]
>> >> On Behalf Of Peter Schobel
>> >> Sent: Wednesday, August 11, 2010 3:28 PM
>> >> To: linux clustering
>> >> Subject: Re: [Linux-cluster] How does caching work in GFS1?
>> >>
>> >> Increasing demote_secs did not seem to have an appreciable effect.
>> >
>> > We run some hosts with demote_secs=86400, for what it's worth.  They
>> > tend to go through a "cold start" each morning, but are responsive for
>> > the remainder of the day.
>> >
>> >> The du command is a simplification of the use case. Our developers run
>> >> scripts which make tags in source code directories which require
>> >> stat'ing the files.
>> >
>> > Gotcha.  I don't have many good suggestions for version control, but I
>> > can offer commiseration.  Some systems are worse than others.
>> > (Subversion for instance tends to create lots of little lock files, and
>> > performs very poorly on just about every filesystem we've tried.)
>> >
>> > How much RAM do you have?  All filesystems like plenty of cache.
>> >
>> > One thing you can do is run "gfs_tool counters <mount-point>" a few
>> > times during your 20GB test, that may give you some insight.  For
>> > example, does the number of locks increase steadily or does it plateau?
>> > Does it abruptly drop following the test?  Does the "glocks reclaimed"
>> > number accumulate rapidly?  When locks are held, stat() operations tend
>> > to be very fast.  When a lock has to be obtained, that's when they are
>> > slow.
>> >
>> > (Any cluster engineers out there, feel free to tell me if any of this is
>> > right or wrong--I've had to base my understanding of GFS on a lot of
>> > experimentation and empirical evidence, not on a deep understanding of
>> > the software.)
>> >
>> > -Jeff
>> >
>> >
>> >
>> > --
>> > Linux-cluster mailing list
>> > Linux-cluster at redhat.com
>> > https://www.redhat.com/mailman/listinfo/linux-cluster
>> >
>>
>>
>>
>



-- 
Peter Schobel
~



