[Linux-cluster] LOCK_DLM Performance under Fire

David Teigland teigland at redhat.com
Thu Apr 7 02:30:37 UTC 2005


On Wed, Apr 06, 2005 at 12:01:02PM -0700, Peter Shearer wrote:

> The app itself is a really old COBOL app built on Liant's RM/COBOL --
> an abstraction layer similar to Java which allows the same object
> code to run on Linux, UNIX, and Windows with very little
> modification, via a runtime application.  So, while I have access to
> the source for the compiled object, I don't have access to the
> runtime app code, which is really the thing doing all the locking.
> 
> This specific testing app is opening one file with locks, but it's
> beating that file up.  Essentially, it goes through the file
> performing a series of sorts and searches which, for the most part,
> stress the CPU more than the I/O.  The "real" application will
> generally not be nearly as intense, but it will open probably
> around 100 shared files simultaneously with POSIX locking.  Would
> adjusting SHRINK_CACHE_COUNT and SHRINK_CACHE_MAX in lock_dlm.h
> affect this type of application?  Are there any other tunable
> parameters that would help?  I'm not tied to DLM at this
> point... is there another mechanism which would do this equally well?
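
For context, the POSIX locking in question means plain fcntl() record
locks.  A minimal sketch of the kind of call the RM/COBOL runtime
presumably issues (the file name and byte range here are made up for
illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            struct flock fl;
            int fd = open("datafile", O_RDWR);  /* hypothetical file */

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            fl.l_type = F_WRLCK;    /* exclusive write lock */
            fl.l_whence = SEEK_SET;
            fl.l_start = 0;         /* lock one record: bytes 0..127 */
            fl.l_len = 128;

            /* Block until the lock is granted.  On GFS with lock_dlm,
               each such request becomes a cluster-wide lock operation. */
            if (fcntl(fd, F_SETLKW, &fl) < 0) {
                    perror("fcntl");
                    return 1;
            }

            /* ... read/update the record ... */

            fl.l_type = F_UNLCK;    /* release the lock */
            fcntl(fd, F_SETLK, &fl);
            close(fd);
            return 0;
    }

Every lock and unlock like this has to go through the cluster lock
manager when the file lives on GFS, which is why a lock-heavy test is
so much more expensive than it would be on a local filesystem.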

Taking a step back, is this a parallelized/clusterized application?
That is, will it be running concurrently on different machines with
the data shared using GFS?  If so, then the distributed fcntl locks
are critical.  If not, it would be safe to use the localflocks mount
option, which means fcntl locks are no longer translated to
distributed locks.
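
For the single-node case, assuming the filesystem is already created,
the mount would look something like this (the device and mount point
are placeholders):

    mount -t gfs -o localflocks /dev/vg/gfs01 /mnt/data

With localflocks, fcntl locks are handled by the local kernel only,
so none of the lock_dlm tuning above comes into play.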

-- 
Dave Teigland  <teigland at redhat.com>



