[Linux-cluster] umount hung single node

Daniel McNeil daniel at osdl.org
Thu Mar 10 01:26:34 UTC 2005


I upgraded to 2.6.11 and the latest cvs a few days ago.
I started my tests on Mar  7 16:01 and they hung on Mar  9 12:34.
This is a 3-node cluster, but only one node in the hung test has
gfs mounted, and that node is trying to unmount:

root     12500 12494  0 12:34 ?        00:00:01 umount /gfs_stripe5

$ cat /proc/12500/wchan
.text.lock.ast

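(For anyone following along: /proc/<pid>/wchan names the kernel function a task is sleeping in, which is how the `.text.lock.ast` value above was obtained. A minimal sketch of that check, using the current shell's own PID as a stand-in for the hung umount:)

```shell
# Inspect where a process is waiting inside the kernel.
# $target is a placeholder PID; we use our own shell here so the
# example is self-contained -- substitute the hung process's PID.
target=$$

# wchan names the kernel symbol the task is blocked in
# ("0" if the task is runnable rather than sleeping):
cat /proc/$target/wchan; echo

# On kernels with stack tracing support, the full kernel stack is
# also available (usually readable by root only):
# cat /proc/$target/stack
```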
dlm_astd is spinning, as top shows:

12302 root      20  -5     0    0    0 R 99.9  0.0 280:28.23 dlm_astd

I've attached the output from /proc/cluster/dlm_debug.

Is there any other useful data to pull off the node to see what is
going on?
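(One common answer to that question, though nothing in this thread confirms it is what was done here: a SysRq task-state dump captures the kernel stack of every task on the node, which would show both umount and dlm_astd. A sketch, assuming root and a kernel built with CONFIG_MAGIC_SYSRQ:)

```shell
# SysRq 't' asks the kernel to log the state and stack of every
# task; reading the kernel log back then shows where each task is
# stuck. Commented out because it requires root and floods the log:
# echo t > /proc/sysrq-trigger
# dmesg | tail -n 200

# Harmless check that the SysRq trigger interface is present:
ls /proc/sysrq-trigger
```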

Daniel

-------------- next part --------------
stripefs processed 0 requests
stripefs resend marked requests
stripefs resent 0 requests
stripefs recover event 6652 finished
stripefs move flags 1,0,0 ids 6652,6652,6652
stripefs move flags 0,1,0 ids 0,6656,0
stripefs move use event 6656
stripefs recover event 6656 (first)
stripefs add nodes
stripefs total nodes 1
stripefs rebuild resource directory
stripefs rebuilt 0 resources
stripefs recover event 6656 done
stripefs move flags 0,0,1 ids 0,6656,6656
stripefs process held requests
stripefs processed 0 requests
stripefs recover event 6656 finished
stripefs move flags 1,0,0 ids 6656,6656,6656
stripefs move flags 0,1,0 ids 0,6660,0
stripefs move use event 6660
stripefs recover event 6660 (first)
stripefs add nodes
stripefs total nodes 1
stripefs rebuild resource directory
stripefs rebuilt 0 resources
stripefs recover event 6660 done
stripefs move flags 0,0,1 ids 0,6660,6660
stripefs process held requests
stripefs processed 0 requests
stripefs recover event 6660 finished
stripefs move flags 1,0,0 ids 6660,6660,6660

