[Linux-cluster] GFS volume already mounted or /mnt busy?

Robert Peterson rpeterso at redhat.com
Fri Dec 22 21:19:43 UTC 2006


Robert Peterson wrote:
> bigendian+gfs at gmail.com wrote:
>> Hello Robert,
>>
>> The other node was previously rebuilt for another temporary purpose 
>> and isn't attached to the SAN.  The only thing I can think of that 
>> might have been out of the ordinary is that I may have pulled the 
>> power on the machine while it was shutting down during some file 
>> system operation.  The disk array itself never lost power.
>>
>> I do have another two machines configured in a different cluster 
>> attached to the SAN.  CLVM on machines in the other cluster does show 
>> the volume that I am having trouble with though those machines do not 
>> mount the device.  Could this have caused the trouble?
>> More importantly, is there a way to repair the volume?  I can see the 
>> device with fdisk -l and gfs_fsck completes with errors, but mount 
>> attempts always fail with the "mount: /dev/etherd/e1.1 already 
>> mounted or /gfs busy" error.  I don't know how to debug this at a 
>> lower level to understand why this error is happening.  Any pointers?
Hi Tom,

Another thought.  If someone went in there without your knowledge and
did something bad like "mkfs.ext3 /dev/etherd/e1.1" (or mkfs.vfat,
reiserfs, xfs, jffs2, or whatever), or worse, did the same to the
underlying device, it would also manifest itself as the problem
you're seeing.
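
A quick, read-only way to check for that (just a sketch; blkid and
file are standard tools, though older versions may not know what a
GFS superblock looks like) is to ask what signature is on the device
right now:

    # Both commands only read from the device; neither writes to it.
    blkid /dev/etherd/e1.1     # prints TYPE="ext3", "vfat", etc. if a
                               # foreign mkfs left its signature behind
    file -s /dev/etherd/e1.1   # similar check using magic numbers

If either one reports ext3, vfat, or some other foreign file system,
that's a strong hint the volume was overwritten; if they report
nothing, it may only mean the tool doesn't recognize GFS.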

If it were me, I'd do "gfs_edit /dev/etherd/e1.1" and have a look at
block 0.  The gfs_edit tool starts you out on block 0x10 (the
superblock), so you'll have to press "b" 16 times, or else arrow up,
change the block number to 0, and press enter.  The first 16 blocks
of the file system should be all zeros, 0x00.  If they look like a
bunch of numbers instead, then maybe somebody overwrote your file
system.  BTW, gfs_edit is a dangerous tool, so don't change anything
with it.
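
If you'd rather not go anywhere near a tool that can write to the
device, a rough read-only equivalent (assuming the default 4K GFS
block size) is to dump those first 16 blocks and eyeball them:

    # Read the first 16 x 4096-byte blocks and print them in hex.
    # hexdump -C collapses runs of identical lines into a single "*",
    # so an untouched GFS volume should show little more than zeros here.
    dd if=/dev/etherd/e1.1 bs=4096 count=16 2>/dev/null | hexdump -C

And if block 0 turns out to be clean, the kernel log right after a
failed mount attempt ("dmesg | tail") usually carries a more specific
GFS error than the generic "already mounted or busy" message that
mount prints.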

Regards,

Bob Peterson
Red Hat Cluster Suite



