[Linux-cluster] Question :)
Gerald G. Gilyeat
ggilyeat at jhsph.edu
Tue May 31 18:06:27 UTC 2005
First - thanks for the help the last time I poked my pointy little head in here.
Things have been -much- more stable since we bumped the lock limit to 2097152 ;)
However, we're still running into the occasional "glitch" where it seems like a single process locks up -all- disk access on us until it completes its operation.
Specifically, we see this when folks are doing rsyncs of large amounts of data (one of my faculty has been trying to copy over a couple thousand 16MB files). Even piping tar through ssh (from the target machine: ssh user@host "cd /data/dir/path; tar -cpsf -" | tar -xpsf -) results in similar behaviour.
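For reference, the two transfer variants look roughly like this (host and directory names below are placeholders, not our actual paths):

    # straight rsync of a couple thousand 16MB files onto the GFS volume
    rsync -av user@host:/data/dir/path/ /gfs/destination/

    # tar piped through ssh, run from the target machine
    ssh user@host "cd /data/dir/path; tar -cpsf -" | tar -xpsf -

Either way, once the long write stream starts, everything else touching the filesystem stalls until it finishes.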
Is this tunable, or is it simply a fact of life that we're going to have to live with? It only occurs with big or long writes. Reads aren't a problem (it just takes 14 hours to dump 1.5TB to tape...)
Thanks!
--
Jerry Gilyeat, RHCE
Systems Administrator
Molecular Microbiology and Immunology
Johns Hopkins Bloomberg School of Public Health