[Linux-cluster] Timeout causing GFS filesystem inaccessibility
Michael Conrad Tadpol Tilstra
mtilstra at redhat.com
Thu Jun 9 14:40:40 UTC 2005
On Thu, Jun 09, 2005 at 08:33:34AM -0400, Kovacs, Corey J. wrote:
> I've seen the same behavior with slocate.cron doing its thing. I had to add
> gfs to its filesystem exclude list. My setup is as follows ...
>
> 3 HP-DL380-G3's
> 1 MSA1000
> 6 FC2214 (QL2340) FC cards
>
> The three nodes are set up as lock managers and a 1TB fs created. When
> populating the file system using scp, rsync, etc. from another
> machine with approx 400GB worth of 50k files, the target machine would
> become unresponsive. This led me to move to the latest version
> available at the time (6.0.2.20-1) and to set up alternate NICs for
> lock_gulmd to use, which seems to have helped tremendously.
>
> That said, after the first successful complete data transfer on this
> cluster I went to do a 'du -sh' on the mount point and the machine got
> into a state where it would refuse to fork, which is exactly what the
> problem was with slocate.cron.
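For reference, the exclude-list change Corey describes amounts to adding gfs to the -f filesystem-type list that slocate.cron passes to updatedb, so the nightly index run skips GFS mounts instead of walking the shared 1TB filesystem. A minimal sketch; the quoted updatedb line mimics a typical RHEL-era /etc/cron.daily/slocate.cron, and your distribution's list may differ:

```shell
# Append gfs to the -f (filesystems-to-skip) list on the updatedb line.
# Here the line is fed in on stdin for illustration; in practice you would
# run the same sed edit against /etc/cron.daily/slocate.cron itself.
echo 'updatedb -f "nfs,smbfs,ncpfs,proc,devpts"' |
  sed 's/-f "\([^"]*\)"/-f "\1,gfs"/'
# emits: updatedb -f "nfs,smbfs,ncpfs,proc,devpts,gfs"
```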
That's not good. Can you do a gulm_tool getstats <masterserver>:lt000?
I just want to see how full the queues are.
--
Michael Conrad Tadpol Tilstra
At night as I lay in bed looking at the stars I thought 'Where the hell is
the ceiling?'