[Linux-cluster] GFS tuning for combined batch / interactive use

Steven Whitehouse swhiteho at redhat.com
Fri Dec 17 16:50:59 UTC 2010


Hi,

On Fri, 2010-12-17 at 17:35 +0100, Kevin Maguire wrote:
> Hi
> 
> Bob/Steven/Ben - many thanks for responding.
> 
> > There is some helpful stuff here on the tuning side:
> >
> > http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_tuning
> 
> Indeed, we have implemented many of these suggestions: "fast statfs" is on, 
> -r 2048 was used, quotas off, the cluster interconnect is a dedicated 
> gigabit LAN, hardware RAID (RAID10) on the SAN, and so on. Maybe we are 
> just at the limit of the hardware.
> 
> I have also asked and it seems the one issue that might cause slowdown, 
> multiple nodes all trying to access the same inode (say all updating files 
> in a common directory), should not happen with our application. I am told 
> that essentially batch jobs will create their own working directory when 
> executing, and work almost exclusively within that subtree. Interactive 
> work is in another tree entirely.
> 
> However I'd like to double check that - but how? When we looked at Lustre 
> for a similar app there was a /proc interface that you could probe to see 
> what files were being opened/read/written/closed by each connected node - 
> does GFS offer something similar? Would mounting debugfs help me there?
> 
> Kevin
> 
You can get a glock dump via debugfs, which may show up contention: look
for type 2 (inode) glocks which have lots of lock requests queued but
not granted. The lock requests (holders) are tagged with the relevant
process. In RHEL6/upstream there are also gfs2 tracepoints which can be
used to get the same information dynamically; these too give some
pointers to the processes involved.
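
For the glock dump, something along these lines should do it (a rough
sketch, untested here; the directory under /sys/kernel/debug/gfs2/ is
named <clustername>:<fsname>, so "mycluster:scratch" below is just a
placeholder for your own filesystem):

  # mount debugfs if it isn't already mounted
  mount -t debugfs none /sys/kernel/debug

  # dump the glocks for the filesystem; inode glocks are type 2, i.e.
  # "G: ... n:2/<hex inode number>" lines. Under each glock, the H:
  # lines are the holders: an f: field containing H means granted, W
  # means still waiting, and p:<pid> [command] identifies the process.
  cat /sys/kernel/debug/gfs2/mycluster:scratch/glocks

  # quick way to spot queued-but-not-granted requests
  grep 'H:.*f:.*W' /sys/kernel/debug/gfs2/mycluster:scratch/glocks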
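
For the tracepoints, a minimal sketch using the standard ftrace
interface (assuming your kernel has the gfs2 events compiled in):

  # list the available gfs2 events
  ls /sys/kernel/debug/tracing/events/gfs2/

  # enable them all and watch the live stream; each line of output
  # starts with the command/pid of the process that fired the event
  echo 1 > /sys/kernel/debug/tracing/events/gfs2/enable
  cat /sys/kernel/debug/tracing/trace_pipe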

Steve.
 