[Linux-cluster] Not able to add the cluster nodes
ESGLinux
esggrupos at gmail.com
Thu May 21 16:21:49 UTC 2009
Hello,
The same thing happened to me this morning. Restarting cman with "service cman start" on node2 fixed it for me. For some reason your cman on node2 is stopped, which is why clustat there reports "Could not connect to CMAN" and mount.gfs cannot reach gfs_controld.
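A rough sequence to try on node2 (a sketch; the service names assume the standard RHEL/CentOS Cluster Suite init scripts):

    service cman status      # see whether cman is running at all
    service cman start       # starts cman plus fenced, dlm_controld, gfs_controld
    service clvmd start      # only needed if the volume group is clustered LVM
    mount -a                 # retry the gfs mounts from /etc/fstab

Once gfs_controld is up, the "can't connect to gfs_controld: Connection refused" errors from mount.gfs should stop.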
HTH
ESG
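P.S. On Ian's earlier suggestion: clean_start goes on the fence_daemon line of /etc/cluster/cluster.conf on both nodes. A sketch (the delay values here are illustrative examples, not from the original post):

    <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>

clean_start="1" tells fenced to skip startup fencing of nodes it has not yet heard from, which can help when both nodes boot at the same time. Remember to bump config_version and propagate the updated file to both nodes.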
2009/5/18 ravi kumar <ravikumar.c25 at gmail.com>
> Thanks Ian Hayes,
>
> Another problem I am facing now: GFS is not able to mount.
>
> Please find the details
>
> dbnode1:/root # clustat
> Cluster Status for md_cluster @ Mon May 18 18:15:43 2009
> Member Status: Quorate
>
> Member Name ID Status
> ------ ---- ---- ------
> dbnode1.xtks.com 1 Online
> dbnode2.xtks.com 2 Online, Local
>
> dbnode1:/root # gfs_mkfs -t md_cluster:test1 -p lock_dlm -j 2
> /dev/vg_cluster1/test1
> This will destroy any data on /dev/vg_cluster1/test1.
> It appears to contain a gfs2 filesystem.
>
> Are you sure you want to proceed? [y/n] y
>
> Device: /dev/vg_cluster1/test1
> Blocksize: 4096
> Filesystem Size: 5177000
> Journals: 2
> Resource Groups: 80
> Locking Protocol: lock_dlm
> Lock Table: md_cluster:test1
>
> Syncing...
> All Done
> dbnode1:/root
>
> dbnode1:/root # cat /etc/fstab
> /dev/rootvg/rootvol / ext3 defaults 1 1
> /dev/rootvg/varvol /var ext3
> defaults,nosuid 1 2
> /dev/rootvg/homevol /home ext3
> defaults,nosuid 1 2
> /dev/rootvg/optvol /opt ext3 defaults 1 2
> LABEL=/boot /boot ext3
> defaults,nosuid 1 2
> tmpfs /dev/shm tmpfs
> defaults,nosuid 0 0
> devpts /dev/pts devpts gid=5,mode=620 0 0
> sysfs /sys sysfs defaults 0 0
> proc /proc proc defaults 0 0
> /dev/rootvg/swapvol swap swap defaults 0 0
> /dev/cdrom /mnt/cdrom auto pamconsole,exec,noauto,managed 0 0
> /dev/vg_cluster1/test1 /test1 gfs defaults 0 0
>
> dbnode1:/root # mount -a
> /sbin/mount.gfs: error mounting /dev/mapper/vg_cluster1-test1 on
> /test1: No such device
>
> dbnode1:/root # clustat
> Cluster Status for md_cluster @ Mon May 18 18:41:30 2009
> Member Status: Quorate
>
> Member Name ID Status
> ------ ---- ---- ------
> dbnode1.xtks.com 1 Online, Local
> dbnode2.xtks.com 2 Offline
>
> dbnode1:/root #
>
>
> dbnode2:/root # clustat
> Cluster Status for md_cluster @ Mon May 18 18:15:52 2009
> Member Status: Quorate
>
> Member Name ID Status
> ------ ---- ---- ------
> dbnode1.xtks.com 1 Online, Local
> dbnode2.xtks.com 2 Online
>
> dbnode2:/root # mount -a
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: can't connect to gfs_controld: Connection refused
> /sbin/mount.gfs: gfs_controld not running
> /sbin/mount.gfs: error mounting lockproto lock_dlm
> dbnode2:/root #
>
> dbnode2:/root # clustat
> Could not connect to CMAN: Connection refused
> dbnode2:/root #
>
> On Mon, May 18, 2009 at 3:04 AM, Ian Hayes <cthulhucalling at gmail.com>
> wrote:
> > Try adding clean_start="1" to the fence_daemon line of both members and
> > try it again.
> >
> > On Sun, May 17, 2009 at 11:28 AM, ravi kumar <ravikumar.c25 at gmail.com>
> > wrote:
> >>
> >> Hi Linux cluster experts,
> >>
> >> dbnode1 is not able to add the other node as a member, and the same
> >> happens on the dbnode2 side. Please help...
> >>
> >> Please find the details as below
> >>
> >>
> >>
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
>
>