[Linux-cluster] GFS Mounting Issues

Caron, Chris caronc at navcanada.ca
Thu Jul 31 16:40:20 UTC 2008


I can't seem to get a GFS mount working, and I was hoping someone
could help me out.
First of all, here are my details:
- I'm running RHEL 5.2
- Kernel (uname -r): 2.6.18-92.el5
- I'm using GFS (as opposed to GFS2) because the reports on the Red Hat
website say that GFS2 is still in development and warn against using it
in a production environment (hence my choice).

We have a SAN disk on order... but for now (and just to get the hang of
GFS) I'm using the tgtadm tool to export a logical volume from one
computer, and I'm connecting to it remotely from another using iscsiadm
(a rough sketch of the commands is below the package list). I have all
of this working wonderfully.
The tools I'm using come from:
 - iscsi-initiator-utils-6.2.0.868-0.7.el5
 - scsi-target-utils-0.0-0.20070620snap.el5
 - lsscsi-0.17-3.el5
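
For reference, here is roughly the sequence I used on the target and
initiator sides (the IQN, LV path, and IP address below are
placeholders, not my real values):

# on the exporting machine (target)
service tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2008-07.ca.example:storage.gfs
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/VolGroup00/lv_gfs
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# on each cluster node (initiator)
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2008-07.ca.example:storage.gfs \
    -p 192.168.0.10 --login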

From one of the cluster nodes, I establish a remote connection to the
exported volume. Running lsscsi, I get:
[root@node01 ~]# lsscsi
[0:0:0:0]    disk    FUJITSU  MAP3367NC        5608  /dev/sda
[0:0:6:0]    process PE/PV    1x3 SCSI BP      1.1   -       
[37:0:0:0]   storage IET      Controler        0001  -       
[37:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdb

Here /dev/sdb is the new virtual disk, attached as it should be (this
part works fine).
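
The session itself can also be confirmed from the initiator side with:

iscsiadm -m session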

[root@node01 ~]# gfs_mkfs -j5 -p lock_dlm -t rhc1:gfs.db.cldn.rhc1 /dev/sdb -O
Device:                    /dev/sdb
Blocksize:                 4096
Filesystem Size:           360400
Journals:                  5
Resource Groups:           8
Locking Protocol:          lock_dlm
Lock Table:                rhc1:gfs.db.cldn.rhc1

Syncing...
All Done

Here 'rhc1' is the name of the cluster, and -j5 creates five journals
because there are 5 nodes (GFS needs one journal per node that mounts
the filesystem).

Below is the entry in the <resources> section of the
'/etc/cluster/cluster.conf'. 
<clusterfs device="/dev/sdb" force_unmount="0" fsid="24701" fstype="gfs"
mountpoint="/mnt/cldn_pgsql" name="gfs.db.cldn.rhc1" options=""/>
I arbitrarily chose the fsid because I have no clue what it means :).
I'm assuming it just has to be unique among the clusterfs entries.
Now for the part that frustrates me:

[root@node01 ~]# clusvcadm -e gfs.db.cldn.rhc1
Local machine trying to enable service:gfs.db.cldn.rhc1...Service does
not exist
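
From what I can tell, clusvcadm enables a <service>, and all I have
defined is a bare resource in <resources>. Maybe I need a service
stanza that references it, something like this (just a guess on my
part):

<rm>
  <resources>
    <clusterfs device="/dev/sdb" force_unmount="0" fsid="24701"
        fstype="gfs" mountpoint="/mnt/cldn_pgsql"
        name="gfs.db.cldn.rhc1" options=""/>
  </resources>
  <service autostart="1" name="gfs.db.cldn.rhc1">
    <clusterfs ref="gfs.db.cldn.rhc1"/>
  </service>
</rm>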

(I'm a newbie at this, so maybe that isn't supposed to work anyway.)
Then I tried mounting directly:
[root@node01 ~]# mount /dev/sdb /mnt/cldn_pgsql/
/sbin/mount.gfs: node not a member of the default fence domain
/sbin/mount.gfs: error mounting lockproto lock_dlm
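
I'm guessing the node hasn't joined the default fence domain. These
are the commands I was planning to use to check that (assuming I'm
reading the docs right):

cman_tool nodes     # cluster membership
cman_tool status    # quorum state
group_tool ls       # should list a "fence default" group

and, if the node turns out not to be a member, presumably:

fence_tool join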

[root@node01 ~]# lsmod | grep -e "lock\|gfs"
lock_nolock             7552  0 
gfs                   252612  0 
lock_dlm               24268  1 
gfs2                  347688  3 lock_nolock,gfs,lock_dlm
dlm                   108085  13 lock_dlm
configfs               28753  2 dlm


Can someone please guide me as to what I'm doing wrong?
Please :)

Chris





