[Linux-cluster] optimising DLM speed?

Steven Whitehouse swhiteho at redhat.com
Wed Feb 16 19:25:01 UTC 2011


On Wed, 2011-02-16 at 19:07 +0000, Alan Brown wrote:
> Steve:
> To add some interest (and give you numbers to work with as far as dlm 
> config tuning goes), here are a selection of real world lock figures 
> from our file cluster (cat $d | wc -l)
> /sys/kernel/debug/dlm/WwwHome-gfs2_locks  162299  (webserver exports)
> /sys/kernel/debug/dlm/soft2-gfs2_locks  198890  (Mainly IDL software - 
> it's hopelessly inefficient, 32Gb partition)
> /sys/kernel/debug/dlm/home-gfs2_locks  74649 (users' /home directories, 
> 150Gb partition)
> /sys/kernel/debug/dlm/User1_locks  318337 (thunderbird, mozilla, 
> openoffice caches, 200gb partition)
> /sys/kernel/debug/dlm/Peace04-gfs2_locks  265955  (solar wind data)
> /sys/kernel/debug/dlm/Peace05-gfs2_locks  332267
> /sys/kernel/debug/dlm/Peace06-gfs2_locks  283588
A faster way to grab the lock counts is to grep for gfs2
in /proc/slabinfo, as that will show how many objects are
allocated at any one time.
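As a minimal sketch of that check, assuming the GFS2 slab caches are
named gfs2_* (e.g. gfs2_glock, gfs2_inode; exact names vary by kernel)
and noting that /proc/slabinfo typically needs root to read:

```shell
# Print active object counts for GFS2 slab caches.
# Column 2 of /proc/slabinfo is <active_objs>; cache names are assumptions.
awk '/^gfs2/ { n++; printf "%-24s %s active objects\n", $1, $2 }
     END { if (!n) print "no gfs2 slab caches found" }' \
    /proc/slabinfo 2>/dev/null \
    || echo "cannot read /proc/slabinfo (need root?)"
```

The gfs2_glock count gives a per-node view of how many locks are cached
right now, which is the number that matters for table tuning.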

> At the other end of the spectrum:
> /sys/kernel/debug/dlm/xray0-gfs2_locks  24917 (solar observation data)
> /sys/kernel/debug/dlm/xray2-gfs2_locks  558
> /sys/kernel/debug/dlm/cassini2-gfs2_locks  598 (cassini probe data from 
> Saturn)
> /sys/kernel/debug/dlm/cassini3-gfs2_locks  80
> /sys/kernel/debug/dlm/cassini4-gfs2_locks  246
> /sys/kernel/debug/dlm/rgoplates-gfs2_locks 27 (global archive of 100 
> years' worth of photographic plates from Greenwich observatory)
> Directories may have up to 90k entries in them, although we try very 
> hard to encourage users to use nested structures and keep directories 
> below 1000 entries for human readability (exceptions tend to be mirrors 
> of offsite archives), but the counterpoint to this is that it drives the
> number of directories up - which is why I was asking about the 
> dirtbl_size entry.
The dirtbl refers to the DLM's resource directory and not to the
directories which are in the filesystem. So the dirtbl will scale
according to the number of dlm locks, which in turn scales with the
number of cached inodes.
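As a sketch of sizing those tables from the lock counts above: on
RHEL-era kernels the dlm module exposes dirtbl_size (and rsbtbl_size,
lkbtbl_size) via configfs, and the values must be set before the
lockspace is created, i.e. before mount. The next_pow2 helper is
hypothetical; powers of two are a common choice for hash-table sizes,
but check your kernel's dlm documentation for the exact limits it
accepts.

```shell
# Round a lock count up to the next power of two, a reasonable
# starting point for a hash-table size.
next_pow2() {
    local n=$1 p=1
    while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
    echo "$p"
}

size=$(next_pow2 330000)   # busiest lockspace above held ~330k locks
echo "$size"               # 524288

# Then (as root, dlm module loaded, before any lockspace exists):
#   echo "$size" > /sys/kernel/config/dlm/cluster/dirtbl_size
#   echo "$size" > /sys/kernel/config/dlm/cluster/rsbtbl_size
```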

Directories of the size (number of entries) which you have indicated
should not be causing a problem as lookup should still be quite fast at
that scale.

> ~98% of directories are below 4000 entries.
> FSes usually have 400k-2M inodes in use.
The important thing from the dlm tuning point of view is how many of
those inodes are cached on each node at once, so using the slabinfo
trick above will show that.

> Does that help with tuning recommendations?
It is always useful to have some background information like this, and I
think that trying Dave's suggested DLM table config changes is a good
first step.


> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster