[Linux-cluster] GFS 6.0 lt_high_locks value in cluster.ccs

Jonathan Woytek woytek+ at cmu.edu
Mon Jan 9 19:41:36 UTC 2006


I've been trying to improve the performance of our GFS system since it 
went live a little over a year ago.  Recently, while trying to improve 
gigabit network performance on another machine that seemed to be having 
some serious issues, I disabled hyperthreading in the BIOS and 
performance went through the roof.  "Good!" said I.  Seeing the 
performance increase there, I also disabled hyperthreading on two of our 
main GFS servers, which provide Samba and NFS access to data on GFS 
filesystems (our backbone is iSCSI over GbE).  Performance generally 
increased there as well, but I started to notice two other issues.

Issue #1 is with Samba performance:  While general access and read/write 
speeds are much faster now, there are odd delays when copying a file to 
the filesystem over Samba and when browsing around the filesystem via 
Samba.  The copy delay occurs as soon as the client system (Windows XP) 
is told to copy a file to the mapped network drive--the window involved 
hangs for a few seconds, then the copy starts and moves along at a good 
speed.  The delays during browsing happen only occasionally, and don't 
seem to be related to folder contents.

Issue #2 MAY be the cause of Issue #1, though that's hard to determine 
right now.  Issue #2 is that we are now hitting the highwater mark for 
locks in lock_gulmd almost all day long.  This used to happen only 
occasionally, so we didn't worry about it too much.  When it did happen 
in the past, Samba users would see the same hangs during navigation 
(though nobody ever mentioned a problem copying files to the system).
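
(In case it's useful context:  as far as I can tell from the GFS 6.0 
docs, the current lock count and the configured highwater value can be 
checked on the master lock server with gulm_tool, something along the 
lines of the command below--"lockmaster" is just a placeholder for the 
lock server's name, so treat this as a sketch rather than gospel:

    gulm_tool getstats lockmaster:lt000

If memory serves, the "locks" and "highwater" lines in that output are 
the numbers in question.)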

So, now to my question:  I read in a previous post to this list about 
the lt_high_locks value in cluster.ccs.  Is this a value that can be 
changed at runtime, or am I going to have to bring all of the 
lock_gulmd daemons down to change it?
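
For reference (and with the caveat that the names and numbers here are 
purely illustrative, not our actual settings), my understanding is that 
lt_high_locks sits in the lock_gulm section of cluster.ccs, roughly 
like so:

    cluster {
            name = "examplecluster"
            lock_gulm {
                    servers = ["nodea", "nodeb", "nodec"]
                    lt_high_locks = 2097152
            }
    }

If that's not the right place for it, corrections are welcome.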

jonathan
-- 
Jonathan Woytek                 w: 412-681-3463         woytek+ at cmu.edu
NREC Computing Manager          c: 412-401-1627         KB3HOZ
PGP Key available upon request
