[Linux-cluster] Need help for Clustered NFS
haydar Ali
haydar2906 at hotmail.com
Wed Jul 20 16:48:58 UTC 2005
Hi Jacob,
Do I have to create an /etc/cluster.conf on the 2nd node as well, with the 1st node
set as the preferred node?
Thanks a lot
Haydar
>From: <JACOB_LIBERMAN at Dell.com>
>Reply-To: linux clustering <linux-cluster at redhat.com>
>To: <linux-cluster at redhat.com>
>Subject: RE: [Linux-cluster] Need help for Clustered NFS
>Date: Wed, 20 Jul 2005 11:44:33 -0500
>
>You need to start the clustering services on the 2nd node so it can join
>the cluster. Otherwise it won't be able to access the disk protected by
>the cluster services on node1. The cluster service controls when and
>whether the shared disk resources get mounted by a cluster host.
>
>Thanks, jacob
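[Editor's note: in practice, the advice above amounts to something like the following on rac2. This is a sketch assuming the clumanager init script used by RHCS-era Red Hat releases; check /etc/init.d for the exact service name on your system.]

```shell
# On rac2: enable and start the cluster manager so the node joins the cluster.
# 'clumanager' is the init script name on clumanager-based releases
# (an assumption; the name may differ on your version).
chkconfig clumanager on
service clumanager start

# Then verify that rac2 now shows up as an active member:
clustat
```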
>
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of haydar Ali
> > Sent: Wednesday, July 20, 2005 9:11 AM
> > To: linux-cluster at redhat.com
> > Subject: [Linux-cluster] Need help for Clustered NFS
> >
> > Hi,
> >
> > I want to set up and configure clustered NFS.
> > I have created 2 quorum partitions, /dev/sdd2 and /dev/sdd3
> > (100MB each), and formatted them:
> >
> > mkfs -t ext2 -b 4096 /dev/sdd2
> > mkfs -t ext2 -b 4096 /dev/sdd3
> >
> > I created another large partition, /dev/sdd4 (over 600GB), and
> > formatted it with an ext3 filesystem.
> >
> > I installed the cluster suite on the 1st node (RAC1) and
> > started the rawdevices service on both nodes, RAC1 and RAC2 (it's OK).
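[Editor's note: for reference, the raw-device bindings that pair these quorum partitions with /dev/raw/raw1 and /dev/raw/raw2 (the devices referenced later in cluster.conf) would typically live in /etc/sysconfig/rawdevices. A sketch, assuming the partition layout described above:]

```
# /etc/sysconfig/rawdevices -- read by the rawdevices init script.
# Bind the two quorum partitions to the raw devices that cluster.conf
# refers to as quorumPartitionPrimary / quorumPartitionShadow.
/dev/raw/raw1 /dev/sdd2
/dev/raw/raw2 /dev/sdd3
```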
> >
> > This is the hosts file /etc/hosts on node1 (RAC1):
> >
> > # Do not remove the following line, or various programs
> > # that require network functionality will fail.
> > 127.0.0.1 localhost.localdomain localhost
> > #
> > # Private hostnames
> > #
> > 192.168.253.3 rac1.domain.net rac1
> > 192.168.253.4 rac2.domain.net rac2
> > 192.168.253.10 rac1
> > #
> > # Hostnames used for Interconnect
> > #
> > 1.1.1.1 rac1i.domain.net rac1i
> > 1.1.1.2 rac2i.domain.net rac2i
> > #
> > -----------------------
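[Editor's note: name resolution for both the public and interconnect hostnames can be sanity-checked on each node with getent. A sketch, to be run where the /etc/hosts above is installed:]

```shell
# Each hostname should resolve to the address listed in /etc/hosts.
# Run on both nodes; a missing or wrong entry here is a common cause
# of heartbeat/interconnect problems.
getent hosts rac1 rac2 rac1i rac2i
```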
> >
> >
> > I launched the cluconfig command and it generated
> > /etc/cluster.conf; here is its content:
> >
> > -------------------------------
> > This file is automatically generated. Do not manually edit!
> >
> > [cluhbd]
> > logLevel = 4
> >
> > [clupowerd]
> > logLevel = 4
> >
> > [cluquorumd]
> > logLevel = 4
> >
> > [cluster]
> > alias_ip = 192.168.253.10
> > name = project
> > timestamp = 1121804245
> >
> > [clusvcmgrd]
> > logLevel = 4
> >
> > [database]
> > version = 2.0
> >
> > [members]
> > start member0
> > start chan0
> > name = rac1
> > type = net
> > end chan0
> > id = 0
> > name = rac1
> > powerSwitchIPaddr = rac1
> > powerSwitchPortName = unused
> > quorumPartitionPrimary = /dev/raw/raw1
> > quorumPartitionShadow = /dev/raw/raw2
> > end member0
> > start member1
> > start chan0
> > name = rac2
> > type = net
> > end chan0
> > id = 1
> > name = rac2
> > powerSwitchIPaddr = rac2
> > powerSwitchPortName = unused
> > quorumPartitionPrimary = /dev/raw/raw1
> > quorumPartitionShadow = /dev/raw/raw2
> > end member1
> >
> > [powercontrollers]
> > start powercontroller0
> > IPaddr = rac1
> > login = unused
> > passwd = unused
> > type = null
> > end powercontroller0
> > start powercontroller1
> > IPaddr = rac2
> > login = unused
> > passwd = unused
> > type = null
> > end powercontroller1
> >
> > [services]
> > start service0
> > checkInterval = 30
> > start device0
> > start mount
> > start NFSexports
> > start directory0
> > start client0
> > name = rac1
> > options = rw
> > end client0
> > name = /u04
> > end directory0
> > end NFSexports
> > forceUnmount = yes
> > fstype = ext3
> > name = /u04
> > options = rw,nosuid,sync
> > end mount
> > name = /dev/sdd4
> > sharename = None
> > end device0
> > name = nfs_project
> > preferredNode = rac2
> > relocateOnPreferredNodeBoot = yes
> > end service0
> > ------------------------------------
> >
> > I created an NFS share on /u04 using the cluadmin command:
> >
> > [root@rac1 root]# cluadmin
> > Wed Jul 20 10:02:20 EDT 2005
> >
> > You can obtain help by entering help and one of the following
> > commands:
> >
> > cluster service clear
> > help apropos exit
> > version quit
> > cluadmin> service show
> > 1) state
> > 2) config
> > 3) services
> > service show what? 2
> > 0) nfs_project
> > c) cancel
> >
> > Choose service: 0
> > name: nfs_project
> > preferred node: rac2
> > relocate: yes
> > monitor interval: 30
> > device 0: /dev/sdd4
> > mount point, device 0: /u04
> > mount fstype, device 0: ext3
> > mount options, device 0: rw,nosuid,sync
> > force unmount, device 0: yes
> > samba share, device 0: None
> > NFS export 0: /u04
> > Client 0: rac1, rw
> > cluadmin> service show state
> > ================== S e r v i c e  S t a t u s ==================
> >
> > Service       Status   Owner   Last Transition   Monitor Interval   Restart Count
> > ------------  -------  ------  ----------------  -----------------  -------------
> > nfs_project   started  rac1    16:21:23 Jul 19   30                 1
> > cluadmin>
> >
> >
> > And when I launched clustat, I got this output showing the error:
> >
> > clustat
> > Cluster Status Monitor (Fileserver Test Cluster)
> > 07:46:05
> > Cluster alias: rac1
> >
> > ===================== M e m b e r S t a t u s ================
> > Member Status Node Id Power Switch
> > -------------- ---------- ---------- ------------
> > rac1 Up 0 Good
> > rac2 Down 1 Unknown
> >
> > =================== H e a r t b e a t S t a t u s ===============
> > Name Type Status
> > ------------------------------ ---------- ------------
> > rac1 <--> rac2 network OFFLINE
> >
> > =================== S e r v i c e S t a t u s ==================
> > Service       Status   Owner   Last Transition   Monitor Interval   Restart Count
> > ------------  -------  ------  ----------------  -----------------  -------------
> > nfs_project   started  rac1    16:07:42 Jul 19   30                 0
> >
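[Editor's note: the OFFLINE heartbeat channel shown above can usually be narrowed down with a couple of basic checks. A sketch, using the interconnect hostname from the /etc/hosts file earlier and the assumed clumanager service name:]

```shell
# From rac1: confirm the interconnect interface on rac2 is reachable.
ping -c 3 rac2i

# On rac2 itself: confirm the cluster daemons are actually running
# ('clumanager' is the assumed init script name; see note above).
service clumanager status
```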
> >
> >
> > And when I launched this command on RAC2:
> > mount -t nfs rac1:/u04 /u04
> > it listed the following error message:
> > Mount: rac1:/u04 failed, reason given by server: Permission denied
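[Editor's note: when an NFS mount is refused with "Permission denied", it is worth comparing the export list the server actually advertises against the client doing the mount. A sketch using the standard NFS utilities:]

```shell
# On rac2: ask rac1 which paths it exports, and to which clients.
# If rac2 does not appear in the client list for /u04, the mount
# will be refused with "Permission denied".
showmount -e rac1
```

In the service configuration shown above, only rac1 is listed as an NFS client for /u04 (Client 0: rac1, rw), so the client list in the cluadmin service definition would be one place to look.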
> >
> > Can someone help me fix this problem in this configuration?
> >
> > Thanks
> >
> > Cheers!
> >
> > Haydar
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > http://www.redhat.com/mailman/listinfo/linux-cluster
> >
>