Hello Robert,<br><br>The other node was previously rebuilt for another, temporary purpose and isn't attached to the SAN. The only thing I can think of that might have been out of the ordinary is that I may have pulled the power on the machine while it was still shutting down, possibly in the middle of a file-system operation. The disk array itself never lost power.
<br><br>I do have two other machines, configured in a different cluster, attached to the SAN. CLVM on those machines does show the volume I'm having trouble with, though they never mount the device. Could this have caused the trouble?
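In case it's useful, here is my (possibly naive) sketch for checking whether the other cluster's nodes hold the volume open. gfs_vol2 is my VG name from the vgdisplay output below; lv_attr and the open count are standard LVM2 reporting fields, and I've guarded the commands so the snippet degrades gracefully on a box without the LVM2 tools:

```shell
# On each node attached to the SAN, ask LVM2 whether the LV is active
# and whether anything holds it open.  "gfs_vol2" is the VG from my
# vgdisplay output; the commands themselves are stock LVM2.
if command -v lvs >/dev/null 2>&1; then
    lvs -o lv_name,vg_name,lv_attr gfs_vol2        # 5th attr char 'a' = active
    lvdisplay gfs_vol2 | grep -E 'LV Name|# open'  # "# open" > 0 means in use
else
    echo "lvm2 tools not installed on this node"
fi
```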
<br><br>More importantly, is there a way to repair the volume? I can see the device with fdisk -l, and gfs_fsck runs to completion (reporting errors), but every mount attempt fails with "mount: /dev/etherd/e1.1 already mounted or /gfs busy". I don't know how to debug this at a lower level to understand why the error is happening. Any pointers?
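These are the low-level checks I know to try; if there is something deeper, please point me at it. The device and mountpoint names are the real ones from my mount command, but the interpretation is guesswork on my part. I gather EBUSY from mount(2) usually means the kernel already has a mount record, device-mapper holds the device open, or a process is sitting on the mountpoint:

```shell
# 1. Does the kernel already have a mount record for either name?
grep -E 'e1\.1|/gfs' /proc/mounts || echo "no existing mount record"

# 2. Is device-mapper holding the device open?  Since /dev/etherd/e1.1 is
#    also an LVM PV, an active LV on top of it would make the raw device
#    busy.  (Guarded in case dmsetup is absent.)
command -v dmsetup >/dev/null 2>&1 && dmsetup ls || echo "dmsetup not available"

# 3. Is a process keeping the mountpoint busy?
command -v fuser >/dev/null 2>&1 && fuser -vm /gfs 2>&1 || echo "no process holds /gfs (or fuser absent)"
```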
<br><br>Here's what I get from vgdisplay:<br> --- Volume group ---<br> VG Name gfs_vol2<br> System ID<br> Format lvm2<br> Metadata Areas 1<br> Metadata Sequence No 3<br> VG Access read/write
<br> VG Status resizable<br> Clustered yes<br> Shared no<br> MAX LV 0<br> Cur LV 1<br> Open LV 0<br> Max PV 0<br> Cur PV 1
<br> Act PV 1<br> VG Size 1.77 TB<br> PE Size 4.00 MB<br> Total PE 465039<br> Alloc PE / Size 445645 / 1.70 TB<br> Free PE / Size 19394 / 75.76 GB
<br> VG UUID 3ngpos-p9iD-yB5i-vfUp-YQHf-2tVa-vqiSFA<br><br>Thanks for your input. Any help is appreciated!<br><br>Tom<br><br><br><div><span class="gmail_quote">On 12/22/06, <b class="gmail_sendername">Robert Peterson
</b> <<a href="mailto:rpeterso@redhat.com">rpeterso@redhat.com</a>> wrote:</span><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><a href="mailto:bigendian+gfs@gmail.com">
bigendian+gfs@gmail.com</a> wrote:<br>> I had a curious thing happen last night. I have a two-node GFS cluster<br>> configuration that currently has only one node. After shutting down<br>> and restarting the one node, I couldn't mount my GFS volume
<br>> because it was no longer visible.<br>><br>> The pvdisplay, lvdisplay, and vgdisplay all came up blank. I was able<br>> to use pvcreate --restorefile and vgcfgrestore to get the volume<br>> back. I then got the following message when trying to mount the volume:
<br>><br>> mount: /dev/etherd/e1.1 already mounted or /gfs busy<br>><br>> I was able to gfs_fsck /dev/etherd/e1.1, but I continue to get this<br>> error. Running strace on the mount command turns up this error:
<br>><br>> mount("/dev/etherd/e1.1", "/gfs", "gfs",<br>> MS_MGC_VAL|MS_NOATIME|MS_NODIRATIME, "") = -1 EBUSY (Device or<br>> resource busy)<br>><br>> What could be happening here?
<br>><br>> Thanks,<br>> Tom<br>Hi Tom,<br><br>Hm. Sounds like something bad happened to the logical volume (i.e. LVM).<br><br>Out of curiosity, what was happening on the other node? It wasn't, by<br>chance, doing
<br>an install, was it? In the past, I've seen some versions of the<br>Anaconda installer<br>load the QLogic driver, detect my SAN, and offer to automatically<br>reformat it as<br>part of the installation. I hope that didn't happen to you, or if it
<br>did, that you<br>unchecked the box for your SAN where the eligible drives were listed.<br><br>I'd check all the systems that are attached to the SAN, regardless of<br>whether or<br>not they're part of the cluster. See if one of them has done something
<br>unexpected<br>to the device.<br><br>Regards,<br><br>Bob Peterson<br>Red Hat Cluster Suite<br><br>--<br>Linux-cluster mailing list<br><a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br><a href="https://www.redhat.com/mailman/listinfo/linux-cluster">
https://www.redhat.com/mailman/listinfo/linux-cluster</a><br></blockquote></div><br>