[Linux-cluster] optimising DLM speed?
David Teigland
teigland at redhat.com
Wed Feb 16 17:58:48 UTC 2011
On Wed, Feb 16, 2011 at 02:12:30PM +0000, Alan Brown wrote:
> > You can set it via the configfs interface:
>
> Given 24GB of RAM, 100 filesystems, several hundred million files,
> and the usual user habit of putting 100k files in a single
> directory:
>
> Is 24GB enough, or should I add more memory? (96GB is easy; beyond
> that is harder.)
>
> What would you consider safe maximums for these settings?
>
> What about the following parameters?
>
> buffer_size
> dirtbl_size
Don't change the buffer size, but I'd increase all the hash table sizes to
4096 and see if anything changes.
echo "4096" > /sys/kernel/config/dlm/cluster/rsbtbl_size
echo "4096" > /sys/kernel/config/dlm/cluster/lkbtbl_size
echo "4096" > /sys/kernel/config/dlm/cluster/dirtbl_size
(Do this before the GFS file systems are mounted, as Steve mentioned.)
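The three echo commands above can be wrapped in a small helper that writes the size into each DLM hash table knob and skips knobs that are missing or read-only (for example, if configfs is not mounted or the dlm module is not loaded). This is only a sketch: the function name and the ability to point it at a test directory are my additions, not part of the original post; on a real node the directory would be `/sys/kernel/config/dlm/cluster` as shown above.

```shell
# set_dlm_table_sizes DIR SIZE
# Write SIZE into each DLM hash table tunable under DIR.
# DIR is /sys/kernel/config/dlm/cluster on a live cluster node;
# any other directory can be passed to exercise the logic safely.
set_dlm_table_sizes() {
    dir="$1"
    size="$2"
    for tbl in rsbtbl_size lkbtbl_size dirtbl_size; do
        f="$dir/$tbl"
        if [ -w "$f" ]; then
            printf '%s\n' "$size" > "$f"
        else
            echo "skipping $f: not writable (configfs mounted? dlm loaded?)" >&2
        fi
    done
}
```

Run it before mounting any GFS file system, e.g. `set_dlm_table_sizes /sys/kernel/config/dlm/cluster 4096`; values written after mount will not affect lockspaces that already exist.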
Dave