Hello Cedric,

Are you using GFS or GFS2? If you are using GFS, I recommend moving to GFS2.
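A quick way to check which format you actually have is to look at the mounted filesystem type or probe the superblock. A minimal sketch (the device path is only an example taken from your lvs output below):

  # Filesystem type is the third field of /proc/mounts: "gfs" or "gfs2".
  grep -E ' gfs2? ' /proc/mounts

  # Or probe the on-disk superblock of one of the logical volumes, e.g.:
  blkid /dev/vg_cluster3_disk2/lv_cluster3_disk2

If it does turn out to be GFS1, gfs2_convert can migrate a filesystem in place, but it needs the volume unmounted on every node and a verified backup first.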
2012/6/3 Cedric Kimaru <rhel_cluster@ckimaru.com>:

Fellow Cluster Compatriots,
I'm looking for some guidance here. Whenever my RHEL 5.7 cluster gets into "LEAVE_START_WAIT" on a given iSCSI volume, the following occurs:

1. I can't read or write to the volume.
2. I can't unmount it from any node.
3. In-flight/pending IOs are impossible to determine or kill, since lsof on the mount fails. Basically all IO operations stall or fail (a couple of alternatives to lsof are sketched after this list).
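When lsof hangs or comes back empty, a rough generic check (not GFS-specific) is to look for tasks blocked in uninterruptible sleep:

  # List processes in D state, where tasks stuck on stalled IO usually sit.
  ps axo pid,stat,wchan:32,args | awk '$2 ~ /D/'

  # Ask the kernel to dump stack traces of all blocked tasks to the log
  # (requires sysrq to be enabled).
  echo w > /proc/sysrq-trigger
  dmesg | tail -n 100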
So my questions are:

1. What does the output from group_tool -v really indicate, "00030005 LEAVE_START_WAIT 12 c000b0002 1"? The group_tool man page doesn't list these fields.
2. Does anyone have a list of what these fields represent?
3. Corrective actions: how do I get out of this state without rebooting the entire cluster?
4. Is it possible to determine the offending node? (A rough cross-node check is sketched after this list.)
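One way to narrow this down is to compare the group state on every member and see whose view differs. A minimal sketch, assuming passwordless ssh and the node names from the clustat output below:

  # Show any group entries stuck in a *_WAIT state on each member.
  for n in bl01-node01 bl04-node04 bl05-node05 bl06-node06 bl07-node07 \
           bl08-node08 bl09-node09 bl10-node10 bl11-node11 bl12-node12 \
           bl13-node13 bl14-node14 bl15-node15; do
    echo "== $n =="
    ssh "$n" 'group_tool -v | grep -i wait'
  done

Comparing that per-node output, together with the "node" field group_tool prints next to the state, may point at the member whose leave never completed.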
thanks,
-Cedric


// misc output
root@bl13-node13:~# clustat
Cluster Status for cluster3 @ Sat Jun 2 20:47:08 2012
Member Status: Quorate

 Member Name                ID   Status
 ------ ----                ---- ------
 bl01-node01                   1 Online, rgmanager
 bl04-node04                   4 Online, rgmanager
 bl05-node05                   5 Online, rgmanager
 bl06-node06                   6 Online, rgmanager
 bl07-node07                   7 Online, rgmanager
 bl08-node08                   8 Online, rgmanager
 bl09-node09                   9 Online, rgmanager
 bl10-node10                  10 Online, rgmanager
 bl11-node11                  11 Online, rgmanager
 bl12-node12                  12 Online, rgmanager
 bl13-node13                  13 Online, Local, rgmanager
 bl14-node14                  14 Online, rgmanager
 bl15-node15                  15 Online, rgmanager

 Service Name               Owner (Last)               State
 ------- ----               ----- ------               -----
 service:httpd              bl05-node05                started
 service:nfs_disk2          bl08-node08                started

root@bl13-node13:~# group_tool -v
type  level name            id       state             node id    local_done
fence 0     default         0001000d none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     clvmd           0001000c none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     cluster3_disk1  00020005 none
[4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     cluster3_disk2  00040005 none
[4 5 6 7 8 9 10 11 13 14 15]
dlm   1     cluster3_disk7  00060005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     cluster3_disk8  00080005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     cluster3_disk9  000a0005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     disk10          000c0005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     rgmanager       0001000a none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
dlm   1     cluster3_disk3  00020001 none
[1 5 6 7 8 9 10 11 12 13]
dlm   1     cluster3_disk6  00020008 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     cluster3_disk1  00010005 none
[4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     cluster3_disk2  00030005 LEAVE_START_WAIT 12 c000b0002 1
[4 5 6 7 8 9 10 11 13 14 15]
gfs   2     cluster3_disk7  00050005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     cluster3_disk8  00070005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     cluster3_disk9  00090005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     disk10          000b0005 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]
gfs   2     cluster3_disk3  00010001 none
[1 5 6 7 8 9 10 11 12 13]
gfs   2     cluster3_disk6  00010008 none
[1 4 5 6 7 8 9 10 11 12 13 14 15]

root@bl13-node13:~# gfs2_tool list
253:15 cluster3:cluster3_disk6
253:16 cluster3:cluster3_disk3
253:18 cluster3:disk10
253:17 cluster3:cluster3_disk9
253:19 cluster3:cluster3_disk8
253:21 cluster3:cluster3_disk7
253:22 cluster3:cluster3_disk2
253:23 cluster3:cluster3_disk1

root@bl13-node13:~# lvs
  Logging initialised at Sat Jun 2 20:50:03 2012
  Set umask from 0022 to 0077
  Finding all logical volumes
  LV                            VG                            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv_cluster3_Disk7             vg_Cluster3_Disk7             -wi-ao   3.00T
  lv_cluster3_Disk9             vg_Cluster3_Disk9             -wi-ao 200.01G
  lv_Cluster3_libvert           vg_Cluster3_libvert           -wi-a- 100.00G
  lv_cluster3_disk1             vg_cluster3_disk1             -wi-ao 100.00G
  lv_cluster3_disk10            vg_cluster3_disk10            -wi-ao  15.00T
  lv_cluster3_disk2             vg_cluster3_disk2             -wi-ao 220.00G
  lv_cluster3_disk3             vg_cluster3_disk3             -wi-ao 330.00G
  lv_cluster3_disk4_1T-kvm-thin vg_cluster3_disk4_1T-kvm-thin -wi-a-   1.00T
  lv_cluster3_disk5             vg_cluster3_disk5             -wi-a- 555.00G
  lv_cluster3_disk6             vg_cluster3_disk6             -wi-ao   2.00T
  lv_cluster3_disk8             vg_cluster3_disk8             -wi-ao   2.00T
<a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br></blockquote></div><br><br clear="all"><br>-- <br>esta es mi vida e me la vivo hasta que dios quiera<br>