[Linux-cluster] GNBD and gfs - wrong FS type
Hal
hal_bg at yahoo.com
Wed Jul 18 11:58:06 UTC 2007
Hello,
I have trouble mounting a GNBD-imported GFS filesystem on both nodes of my test cluster. If the locking protocol is set to "lock_nolock" it mounts fine, but that is not what I want. When I use lock_dlm I get:
mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
What am I doing wrong?
Full output follows (SELinux is NOT in enforcing mode):
[root at node2 ~]# modprobe gnbd
[root at node2 ~]# modprobe gfs2
[root at node2 ~]# modprobe gfs
[root at node2 ~]# modprobe lock_dlm
[root at node2 ~]# gnbd_import -n -i 192.168.0.60
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device global_disk
gnbd_recvd: gnbd_recvd started
[root at node2 ~]# cd /etc/init.d/
[root at node2 init.d]# ./cman start
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... done
[ OK ]
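For reference, cman reads /etc/cluster/cluster.conf on both nodes; a minimal two-node config for a cluster named testc looks roughly like this (the node names and the manual fence agent below are placeholders, not my exact file):

```xml
<cluster name="testc" config_version="1">
  <!-- two_node/expected_votes let a 2-node cluster reach quorum -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <method name="single">
          <device name="manual" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="single">
          <device name="manual" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>
```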
[root at node2 ~]# gfs_mkfs -p lock_dlm -t testc:gfs1 -j6 /dev/gnbd/global_disk
This will destroy any data on /dev/gnbd/global_disk.
It appears to contain a gfs filesystem.
Are you sure you want to proceed? [y/n] y
Device: /dev/gnbd/global_disk
Blocksize: 4096
Filesystem Size: 851880
Journals: 6
Resource Groups: 14
Locking Protocol: lock_dlm
Lock Table: testc:gfs1
Syncing...
All Done
[root at node2 ~]# mount -t gfs /dev/gnbd/global_disk /mnt
mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
[root at node2 ~]# dmesg |tail
GFS: fsid=testc:gfs1.0: Scanning for log elements...
GFS: fsid=testc:gfs1.0: Found 0 unlinked inodes
GFS: fsid=testc:gfs1.0: Found quota changes for 0 IDs
GFS: fsid=testc:gfs1.0: Done
SELinux: initialized (dev gnbd0, type gfs), uses xattr
audit(1184744195.259:4): avc: denied { getattr } for pid=1848 comm="hald"
name="global_disk" dev=tmpfs ino=19253 scontext=system_u:system_r:hald_t:s0
tcontext=root:object_r:device_t:s0 tclass=blk_file
Trying to join cluster "lock_dlm", "testc:gfs1"
Joined cluster. Now mounting FS...
GFS: fsid=testc:gfs1.4294967295: can't mount journal #4294967295
GFS: fsid=testc:gfs1.4294967295: there are only 6 journals (0 - 5)
[root at node2 ~]#
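One thing I noticed: 4294967295 is 2^32 - 1, i.e. -1 stored in an unsigned 32-bit field, so my guess is that the mount helper never received a valid journal ID back from lock_dlm:

```shell
# 4294967295 is (uint32_t)-1: the journal id field was apparently
# never filled in with a real journal number (just my assumption).
printf '%u\n' $(( 0xFFFFFFFF ))              # prints 4294967295
printf '%d\n' $(( 0xFFFFFFFF - (1 << 32) ))  # prints -1
```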