[Linux-cluster] GFS lock cache or bug?
Ja S
jas199931 at yahoo.com
Thu May 8 14:28:52 UTC 2008
Hi Wendy:
Thank you very much for the kind answer.
Unfortunately, I am using Red Hat Enterprise Linux WS
release 4 (Nahant Update 5) 2.6.9-42.ELsmp.
When I ran gfs_tool gettune /mnt/ABC, I got:
ilimit1 = 100
ilimit1_tries = 3
ilimit1_min = 1
ilimit2 = 500
ilimit2_tries = 10
ilimit2_min = 3
demote_secs = 300
incore_log_blocks = 1024
jindex_refresh_secs = 60
depend_secs = 60
scand_secs = 5
recoverd_secs = 60
logd_secs = 1
quotad_secs = 5
inoded_secs = 15
quota_simul_sync = 64
quota_warn_period = 10
atime_quantum = 3600
quota_quantum = 60
quota_scale = 1.0000 (1, 1)
quota_enforce = 1
quota_account = 1
new_files_jdata = 0
new_files_directio = 0
max_atomic_write = 4194304
max_readahead = 262144
lockdump_size = 131072
stall_secs = 600
complain_secs = 10
reclaim_limit = 5000
entries_per_readdir = 32
prefetch_secs = 10
statfs_slots = 64
max_mhc = 10000
greedy_default = 100
greedy_quantum = 25
greedy_max = 250
rgrp_try_threshold = 100
There is no glock_purge option. I will try to tune
demote_secs, but I don't think it will fix the
'ls -la' issue.
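As a minimal sketch of that tuning (assuming the /mnt/ABC mount point above; the value 60 is only illustrative against the default of 300 shown in the gettune output, not a recommendation):

```shell
# Lower demote_secs so unused cached glocks are demoted sooner.
# /mnt/ABC is the mount point from this thread; 60 is an
# illustrative value. Guarded so the script is a harmless
# no-op on hosts without gfs_tool.
mnt=/mnt/ABC
if command -v gfs_tool >/dev/null 2>&1; then
    gfs_tool settune "$mnt" demote_secs 60
    gfs_tool gettune "$mnt" | grep demote_secs   # confirm the new value
else
    echo "gfs_tool not installed; commands shown for reference only"
fi
```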
By the way, could you please direct me to a place
where I can find detailed explanations of these
tunable options?
Best,
Jas
--- Wendy Cheng <s.wendy.cheng at gmail.com> wrote:
> Ja S wrote:
> > Hi, All:
> >
>
> I have an old write-up about GFS lock cache issues.
> The Shareroot people have pulled it onto their web
> site:
>
> http://open-sharedroot.org/Members/marc/blog/blog-on-gfs/glock-trimming-patch/?searchterm=gfs
>
> It should explain some of your confusion. The
> tunables described in that write-up are now formally
> included in RHEL 5.1 and RHEL 4.6, so there is no
> need to ask for private patches.
>
> There is a long story about GFS(1)'s "ls -la"
> problem; at one time I planned to do something about
> it. Unfortunately, I have a new job now, so the
> better bet is probably going for GFS2.
>
> I will pass along some thoughts about GFS1's
> "ls -la" when I have some spare time next week.
>
> -- Wendy
>
> > I used to 'ls -la' a subdirectory, which contains
> > more than 30,000 small files, on SAN storage a
> > long time ago, just once, from Node 5, which sits
> > in the cluster but does nothing. In other words,
> > Node 5 is an idle node.
> >
> > Now when I looked at /proc/cluster/dlm_locks on
> > the node, I realised that there are many PR locks,
> > and the number of PR locks is pretty much the same
> > as the number of files in the subdirectory I
> > listed.
> >
> > Then I randomly picked some lock resources and
> > converted the second part (a hex number) of the
> > lock resource names to decimal numbers, which are
> > simply inode numbers. I then searched the
> > subdirectory and confirmed that these inode
> > numbers match the files in the subdirectory.
> >
> >
> > Now, my questions are:
> >
> > 1) How can I find out which Unix command requires
> > what kind of locks? Does the ls command really
> > need a PR lock?
> >
> > 2) How long does GFS cache the locks?
> >
> > 3) Can we configure the caching period?
> >
> > 4) If GFS should not cache the locks for so many
> > days, does that mean this is a bug?
> >
> > 5) Is there a way to find out which process holds
> > a particular lock? Below is a typical record in
> > dlm_locks on Node 5. Is any piece of information
> > in it useful for identifying the process?
> >
> > Resource d95d2ccc (parent 00000000). Name (len=24)
> > "       5        cb5d35"
> > Local Copy, Master is node 1
> > Granted Queue
> > 137203da PR Master:     73980279
> > Conversion Queue
> > Waiting Queue
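The hex-to-inode conversion described above can be reproduced from a shell. A minimal sketch, using the cb5d35 field from the record above and the /mnt/ABC mount point from this thread (the find step would need to run on a node with the filesystem mounted, so it is left commented out):

```shell
# The second field of a GFS lock resource name is the inode
# number in hex; convert it to decimal with printf.
hex=cb5d35                   # from: Name "       5        cb5d35"
ino=$(printf '%d' "0x$hex")
echo "$ino"                  # prints 13327669
# Then map the inode number back to a path:
# find /mnt/ABC -inum "$ino"
```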
> >
> >
> > 6) If I am sure that no processes or applications
> > are accessing the subdirectory, how can I force
> > GFS to release these PR locks so that DLM can
> > release the corresponding lock resources as well?
> >
> >
> > Thank you very much for reading the questions,
> > and I look forward to hearing from you.
> >
> > Jas
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster