From sghosh at redhat.com Tue Jul 1 00:02:43 2008 From: sghosh at redhat.com (Subhendu Ghosh) Date: Mon, 30 Jun 2008 20:02:43 -0400 Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9 In-Reply-To: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com> References: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com> Message-ID: <48697423.2030305@redhat.com> If you are using ocfs2, then ASM and ASMlib are not required. ASM uses raw disks and ASMlib provides ASM a way to easily recognize said disks. cheers Subhendu Tina Soles wrote: > Hello, > > > > I am attempting to setup an Oracle RAC using these instructions: > http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi_2.html#17 > > > > I am running Fedora 9 with kernel = 2.6.25-14.fc9.i686 > > > > I realize this is probably an ?unsupported? version, but it?s the only > version that I could get to work with my firewire setup, so I cannot > change the kernel. > > ocfs2 is up and running, and now I need to install ASMLib 2.0, but it > appears that there is no rpm distribution for this kernel. Therefore, I > am attempting to build my own, from the source files, > oracleasm-2.0.4.tar.gz. After unzipping and untarring, I run > ./configure and it seems to run fine (see below), but when I try to run > make install it bombs with an error no rule to make target > `oracleasm.ko', needed by `install-oracleasm'. Stop. > > > > I don?t have any experience building rpms from source, so any explicit > instructions you can give me would be much appreciated. Also, does this > source file contain everything I need in order to build the kernel > driver, userspace library, and driver support files, or do I need > separate source files for those? Please forgive my ignorance, as I am > new to this. > > > > Thanks in advance for any help you can give me. > > > > Regards, > > Tina > > > > # ./configure > > checking build system type... i686-pc-linux-gnu > > checking host system type... i686-pc-linux-gnu > > checking for gcc... gcc > > checking for C compiler default output file name... a.out > > checking whether the C compiler works... yes > > checking whether we are cross compiling... no > > checking for suffix of executables... > > checking for suffix of object files... o > > checking whether we are using the GNU C compiler... yes > > checking whether gcc accepts -g... yes > > checking for gcc option to accept ANSI C... none needed > > checking how to run the C preprocessor... gcc -E > > checking for a BSD-compatible install... /usr/bin/install -c > > checking whether ln -s works... yes > > checking for ranlib... ranlib > > checking for ar... /usr/bin/ar > > checking for egrep... grep -E > > checking for ANSI C header files... yes > > checking for an ANSI C-conforming const... yes > > checking for sys/types.h... yes > > checking for sys/stat.h... yes > > checking for stdlib.h... yes > > checking for string.h... yes > > checking for memory.h... yes > > checking for strings.h... yes > > checking for inttypes.h... yes > > checking for stdint.h... yes > > checking for unistd.h... yes > > checking for unsigned long... yes > > checking size of unsigned long... 4 > > checking for vendor... not found > > checking for vendor kernel... not supported > > checking for directory with kernel build tree... > /lib/modules/2.6.25-14.fc9.i686/build > > checking for kernel version... 2.6.25-14.fc9.i686 > > checking for capabilities mask in backing_dev_info... yes > > checking for vfsmount in ->get_sb() helpers... 
yes > > checking for for mutex API... yes > > checking for for i_private... yes > > checking for for i_blksize... no > > configure: creating ./config.status > > config.status: creating Config.make > > config.status: creating include/linux/oracleasm/module_version.h > > config.status: creating vendor/sles9/oracleasm.spec-generic > > config.status: creating vendor/rhel4/oracleasm.spec-generic > > config.status: creating vendor/fc6/oracleasm.spec-generic > > config.status: creating vendor/sles10/oracleasm.spec-generic > > config.status: creating vendor/rhel5/oracleasm.spec-generic > > config.status: creating vendor/common/oracleasm-headers.spec-generic > > > > # make install > > make -C include install > > make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include' > > make -C linux install > > make[2]: Entering directory > `/root/rpms/source/oracleasm-2.0.4/include/linux' > > make -C oracleasm install > > make[3]: Entering directory > `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm' > > /bin/sh ../../../mkinstalldirs /usr/local/include/linux/oracleasm > > for hdr in abi.h abi_compat.h disk.h error.h manager.h manager_compat.h > kernel.h compat32.h module_version.h; do \ > > /usr/bin/install -c -m 644 $hdr > /usr/local/include/linux/oracleasm/$hdr; \ > > done > > make[3]: Leaving directory > `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm' > > make[2]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include/linux' > > make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include' > > make -C kernel install > > make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/kernel' > > make[1]: *** No rule to make target `oracleasm.ko', needed by > `install-oracleasm'. Stop. > > make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/kernel' > > make: *** [kernel-install] Error 2 > > > > Tina Soles > > Senior Analyst > > > > STR Software > > > > 11505 Allecingie Parkway > Richmond, VA 23235 > email. tina.soles at strsoftware.com > > phone. 804.897.1600 > fax. 804.897.1638 > > web. www.strsoftware.com > > > > > ------------------------------------------------------------------------ > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -- Subhendu Ghosh Solutions Architect Red Hat From andreas.schneider at f-it.biz Tue Jul 1 08:02:18 2008 From: andreas.schneider at f-it.biz (Andreas Schneider) Date: Tue, 1 Jul 2008 10:02:18 +0200 Subject: [Linux-cluster] inconsistend volume group after pvmove Message-ID: <003701c8db50$c8fcb390$5af61ab0$@schneider@f-it.biz> Hello, This is our setup: We have 3 Linux servers (2.6.18 Centos 5), clustered, with a clvmd running one ?big? volume group (15 SCSI disks a 69,9 GB). After we got an hardware I/O error on one disk out gfs filesystem began to loop. So we stopped all services and we determined the corrupted disk (/dev/sdh) and my intention was to do the following: - pvmove /dev/sdh - vgreduce my_volumegroup /dev/sdh - do an intensive hardware check on the volume But: that?s what happened during pvmove ?v /dev/sdh: . /dev/sdh: Moved: 78,6% /dev/sdh: Moved: 79,1% /dev/sdh: Moved: 79,7% /dev/sdh: Moved: 80,0% Updating volume group metadata Creating volume group backup "/etc/lvm/backup/myvol_vg" (seqno 46). Error locking on node server1: device-mapper: reload ioctl failed: Das Argument ist ung?ltig Unable to reactivate logical volume "pvmove0" ABORTING: Segment progression failed. 
Removing temporary pvmove LV Writing out final volume group after pvmove Creating volume group backup "/etc/lvm/backup/myvol_vg" (seqno 48). [root at hpserver1 ~]# pvscan PV /dev/cciss/c0d0p2 VG VolGroup00 lvm2 [33,81 GB / 0 free] PV /dev/sda VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sdb VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sdc VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sdd VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sde VG fit_vg lvm2 [66,75 GB / 46,75 GB free] PV /dev/sdf VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sdg VG fit_vg lvm2 [68,36 GB / 0 free] PV /dev/sdh VG fit_vg lvm2 [68,36 GB / 58,36 GB free] PV /dev/sdj VG fit_vg lvm2 [68,36 GB / 54,99 GB free] PV /dev/sdi VG fit_vg lvm2 [68,36 GB / 15,09 GB free] PV /dev/sdk1 VG fit_vg lvm2 [68,36 GB / 55,09 GB free] Total: 12 [784,20 GB] / in use: 12 [784,20 GB] / in no VG: 0 [0 ] That sounded bad, and I didn?t have any idea what to do, but read, that pvmove can start at the point it was, so I started pvmove againg and now pvmove could move all data. pvscan and vgscan -vvv showed me, that all data were moved from the /dev/sdh volume to the other volumes. To be sure I restarted my cluster nodes, but they encountered problems mounting the gfs filesystems. I got this error: [root at server1 ~]# /etc/init.d/clvmd stop Deactivating VG myvol_vg: Volume group "myvol_vg" inconsistent WARNING: Inconsistent metadata found for VG myvol_vg - updating to use version 148 0 logical volume(s) in volume group "myvol_vg" now active [ OK ] Stopping clvm: [ OK ] [root at server1 ~]# /etc/init.d/clvmd start Starting clvmd: [ OK ] Activating VGs: 2 logical volume(s) in volume group "VolGroup00" now active Volume group "myvol_vg" inconsistent WARNING: Inconsistent metadata found for VG myvol_vg - updating to use version 151 Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtuFZZe8QKoX8sVA0XRTNoDQVWVftk8cSa Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtqDfFtrJTFTGuju8nNjwtCdPGnzP3hh8k Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtc22hBY40phdVvVdFBFX28PvfF7JrlIYz Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtWfJ1EqXJ309gO3Gx0ZvpNekrmHFo9u2V Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtCP6czghnQFEjNdv9DF6bsUmnK3eJ5vKp Error locking on node server1: Volume group for uuid not found: tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBt0KNlnblpwOfcnqIjk4GJ662dxOsL70GF 0 logical volume(s) in volume group "myvol_vg" now active [ OK ] As I take a look at it, these 6 volumes are exactly the LVs which should be found and where all datas are stored. The next step was in the beginning step by step and in the end stupid try and error. This was one of the first actions: [root at hpserver1 ~]# vgreduce --removemissing myvol_vg Logging initialised at Tue Jul 1 10:00:52 2008 Set umask to 0077 Finding volume group "myvol_vg" Wiping cache of LVM-capable devices WARNING: Inconsistent metadata found for VG myvol_vg - updating to use version 229 Volume group "myvol_vg" is already consistent We tried to deactivate the volume via vgchange ?n y myvol_vg, we tried to ?removemissing? 
and sadly after a few concurrent tries (dmsetup info ?c, dmsetup mknodes and vgchange ?n y myvol_vg) we can access our LVs, but we still get this message and we don?t know why: Volume group "myvol_vg" inconsistent WARNING: Inconsistent metadata found for VG myvol_vg - updating to use version 228 I?m a little bit worried about our data, Regards Andreas -------------- next part -------------- An HTML attachment was scrubbed... URL: From stevan.colaco at gmail.com Tue Jul 1 09:06:55 2008 From: stevan.colaco at gmail.com (Stevan Colaco) Date: Tue, 1 Jul 2008 12:06:55 +0300 Subject: [Linux-cluster] Cluster doesn't come up while rebooting Message-ID: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com> Hello All, I need your help for one issue i am facing . OS: RHEL4 ES Update 6 64bit I have a deployment where we have 2 + 1 cluster (2 active and one passive). I have a service which is to be failed over but faced issues when i rebooted all 3 servers. Services got disabled. But when i use clusvsadm to manually enable service it works. Here are the logs : - Jun 25 11:13:15 mb1 clurgmgrd[14825]: Resource Group Manager Starting Jun 25 11:13:15 mb1 clurgmgrd[14825]: Loading Service Data Jun 25 11:13:17 mb1 clurgmgrd[14825]: Initializing Services Jun 25 11:13:17 mb1 clurgmgrd: [14825]: /dev/sdh1 is not mounted Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-BACKUP with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-BACKUP returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-STORE with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-STORE returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-DBDATA with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-DBDATA returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-CONF with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CONF returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-REDOLOG with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-REDOLOG returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-INDEX with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-INDEX returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-LOG with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-LOG returned 2 (invalid argument(s)) Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match LABEL=MB2-ZIMBRA-CLUST with a real device Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CLUSTER returned 2 (invalid argument(s)) Jun 25 11:13:22 mb1 clurgmgrd: [14825]: /dev/sdg1 is not mounted Jun 25 11:13:27 mb1 clurgmgrd: [14825]: /dev/sdf1 is not mounted Jun 25 11:13:33 mb1 clurgmgrd: [14825]: /dev/sde1 is not mounted Jun 25 11:13:38 mb1 clurgmgrd: [14825]: /dev/sdd1 is not mounted Jun 25 11:13:43 mb1 clurgmgrd: [14825]: /dev/sdc1 is not mounted Jun 25 11:13:45 mb1 rgmanager: clurgmgrd startup failed Jun 25 11:13:48 mb1 clurgmgrd: [14825]: /dev/sdb1 is not mounted Jun 25 11:13:53 mb1 clurgmgrd: [14825]: /dev/sda1 is not mounted Jun 25 11:13:58 mb1 clurgmgrd[14825]: Services Initialized Jun 25 11:14:01 mb1 clurgmgrd[14825]: Logged in SG "usrm::manager" Jun 25 11:14:01 mb1 clurgmgrd[14825]: Magma Event: Membership 
Change Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: Local UP Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: mbstandby.ku.edu.kw UP Jun 25 11:14:03 mb1 clurgmgrd[14825]: Magma Event: Membership Change Jun 25 11:14:03 mb1 clurgmgrd[14825]: State change: mb2.ku.edu.kw UP MB2 server Logs Jun 25 11:13:40 mb2 clurgmgrd[14776]: Resource Group Manager Starting Jun 25 11:13:40 mb2 clurgmgrd[14776]: Loading Service Data Jun 25 11:13:41 mb2 clurgmgrd[14776]: Initializing Services Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-DBDATA with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-DBDATA returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-INDEX with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-INDEX returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-LOG with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-LOG returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-CONF with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CONF returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: /dev/sdh1 is not mounted Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-BACKUP with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-BACKUP returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-REDOLOG with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-REDOLOG returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-STORE with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-STORE returned 2 (invalid argument(s)) Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match LABEL=MB1-ZIMBRA-CLUST with a real device Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CLUSTER returned 2 (invalid argument(s)) Jun 25 11:13:46 mb2 clurgmgrd: [14776]: /dev/sdf1 is not mounted Jun 25 11:13:52 mb2 clurgmgrd: [14776]: /dev/sdg1 is not mounted Jun 25 11:13:57 mb2 clurgmgrd: [14776]: /dev/sde1 is not mounted Jun 25 11:14:02 mb2 clurgmgrd: [14776]: /dev/sdd1 is not mounted Jun 25 11:14:07 mb2 clurgmgrd: [14776]: /dev/sdc1 is not mounted Jun 25 11:14:10 mb2 rgmanager: clurgmgrd startup failed Jun 25 11:14:12 mb2 clurgmgrd: [14776]: /dev/sdb1 is not mounted Jun 25 11:14:18 mb2 clurgmgrd: [14776]: /dev/sda1 is not mounted Jun 25 11:14:23 mb2 clurgmgrd[14776]: Services Initialized Jun 25 11:14:25 mb2 clurgmgrd[14776]: Logged in SG "usrm::manager" Jun 25 11:14:25 mb2 clurgmgrd[14776]: Magma Event: Membership Change Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: Local UP Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mb1.ku.edu.kw UP Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mbstandby.ku.edu.kw UP MBSTANDBY LOGS Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Resource Group Manager Starting Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Loading Service Data Jun 25 11:13:27 mbstandby clurgmgrd[15850]: Initializing Services Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdl1 is not mounted Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdp1 is not mounted Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdk1 is not mounted Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdn1 is not mounted Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdj1 is not 
mounted Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdo1 is not mounted Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdi1 is not mounted Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdm1 is not mounted Jun 25 11:13:47 mbstandby sshd(pam_unix)[17583]: session opened for user root by (uid=0) Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdd1 is not mounted Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdh1 is not mounted Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdg1 is not mounted Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdc1 is not mounted Jun 25 11:13:56 mbstandby rgmanager: clurgmgrd startup failed Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session opened for user zimbra by (uid=0) Jun 25 11:13:56 mbstandby zimbra: -bash: /opt/zimbra/log/startup.log: No such file or directory Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session closed for user zimbra Jun 25 11:13:56 mbstandby rc: Starting zimbra: failed Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdf1 is not mounted Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdb1 is not mounted Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sde1 is not mounted Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sda1 is not mounted Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Services Initialized Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Logged in SG "usrm::manager" Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Magma Event: Membership Change Jun 25 11:14:09 mbstandby clurgmgrd[15850]: State change: Local UP Jun 25 11:14:12 mbstandby clurgmgrd[15850]: Magma Event: Membership Change Jun 25 11:14:12 mbstandby clurgmgrd[15850]: State change: mb1.ku.edu.kw UP Jun 25 11:14:13 mbstandby clurgmgrd[15850]: Resource groups locked; not evaluating Jun 25 11:14:14 mbstandby clurgmgrd[15850]: Magma Event: Membership Change Jun 25 11:14:14 mbstandby clurgmgrd[15850]: State change: mb2.ku.edu.kw UP Jun 25 11:49:22 mbstandby sshd(pam_unix)[9438]: session opened for user root by (uid=0) I am using e2label to mount on failover as well as primary server. Attached also is my cluster.conf. Right now fencing is not being used properly just using manual and was doing tetsing with HP ILO fencing. !st query i have is why does it show "Magma Event: Membership Change" ? Since i have initially defined 3 members in cluster , it should not give me this . Is it because of some package missing or i have to run up2date ? I have installed following packages : - ccs-1.0.11-1.x86_64.rpm cman-kernheaders-2.6.9-53.5.x86_64.rpm gulm-1.0.10-0.x86_64.rpm magma-plugins-1.0.12-0.x86_64.rpm ccs-devel-1.0.11-1.x86_64.rpm dlm-1.0.7-1.x86_64.rpm gulm-devel-1.0.10-0.x86_64.rpm perl-Net-Telnet-3.03-3.noarch.rpm cman-1.0.17-0.x86_64.rpm dlm-devel-1.0.7-1.x86_64.rpm iddev-2.0.0-4.x86_64.rpm rgmanager-1.9.72-1.x86_64.rpm cman-devel-1.0.17-0.x86_64.rpm dlm-kernel-2.6.9-52.2.x86_64.rpm iddev-devel-2.0.0-4.x86_64.rpm system-config-cluster-1.0.51-2.0.noarch.rpm cman-kernel-2.6.9-53.5.x86_64.rpm dlm-kernel-smp-2.6.9-52.2.x86_64.rpm luci-0.11.0-3.x86_64.rpm cman-kernel-smp-2.6.9-53.5.x86_64.rpm fence-1.32.50-2.x86_64.rpm magma-1.0.8-1.x86_64.rpm Should i be missing any other important package for cluster ? I installed packages using rpm -ivh *.rpm . Also i stopped lock_glumd service as i am using lock_dlm lock manager. Later i tried using just IP in service part w/o mount points and application service. 
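A quick way to narrow down the "Could not match LABEL=... with a real device" errors above is to check, on each node, whether the labels actually resolve to a block device at the time rgmanager runs its initial stop. A minimal sketch, using the MB2-STORE label from the logs; the /dev/sdb1 device name is only an example:

# print the label currently written on a partition
e2label /dev/sdb1

# ask the system which device carries a given label; no output (or an error)
# means the label was not visible on this node when the lookup ran
findfs LABEL=MB2-STORE
blkid -t LABEL=MB2-STORE

If these lookups only start succeeding some time after boot, the cluster is initializing its services before the shared storage has been scanned, which would fit the behaviour in the logs.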
Then also on reboot it doesnt startup.Here are the logs :- Jun 27 19:44:37 mb1 clurgmgrd[12737]: Resource Group Manager Starting Jun 27 19:44:37 mb1 clurgmgrd[12737]: Loading Service Data Jun 27 19:44:37 mb1 fstab-sync[12738]: removed all generated mount points Jun 27 19:44:38 mb1 clurgmgrd[12737]: Initializing Services Jun 27 19:44:38 mb1 clurgmgrd[12737]: Services Initialized Jun 27 19:44:38 mb1 clurgmgrd[12737]: Logged in SG "usrm::manager" Jun 27 19:44:38 mb1 clurgmgrd[12737]: Magma Event: Membership Change Jun 27 19:44:38 mb1 clurgmgrd[12737]: State change: Local UP Jun 27 19:44:38 mb1 rgmanager: clurgmgrd startup succeeded Jun 27 19:44:41 mb1 clurgmgrd[12737]: Magma Event: Membership Change Jun 27 19:44:41 mb1 clurgmgrd[12737]: State change: mbstandby.ku.edu.kw UP Jun 27 19:44:43 mb1 clurgmgrd[12737]: Magma Event: Membership Change Jun 27 19:44:43 mb1 clurgmgrd[12737]: State change: mb2.ku.edu.kw UP Attached is also cluster.conf for this Please guide what could be the issue. Thanks in advance. Regards, -Steven -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cluster-with-IP.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: cluster-with-service.txt URL: From gspiegl at gmx.at Tue Jul 1 10:58:41 2008 From: gspiegl at gmx.at (Gerhard Spiegl) Date: Tue, 01 Jul 2008 12:58:41 +0200 Subject: [Linux-cluster] takeover, fencing & failback Message-ID: <486A0DE1.6010609@gmx.at> Hi all, I'm working on a two node cluster (RHEL 5.2 + RHCS) with one XEN virtual machine per node: node1 => VM1 node2 => VM2 When node1 takes over VM2 via the command: clusvcadm -M vm:VM2 -m node1 node2 gets fenced after takeover is done, which is probably expected behaviour. As node2 comes up again it fetches his VM2 back (nofailback="0", but also fences node1 (ipmilan) where VM1 is still running an therefore interrupted and restartet on node2. When node1 comes up the same game in the other direction begins. Is there a way to avoid this fence loop? In other words: can a service be migrated from node1 to node2 without other services that run on node1 being interrupted? thanks & regards Gerhard -------------- next part -------------- A non-text attachment was scrubbed... Name: cluster.conf Type: text/xml Size: 2291 bytes Desc: not available URL: From egraeler at commvault.com Tue Jul 1 13:51:21 2008 From: egraeler at commvault.com (Ernie Graeler) Date: Tue, 1 Jul 2008 09:51:21 -0400 Subject: [Linux-cluster] GFS2 not releasing disk space? Message-ID: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com> All, I'm new to this list, so I'm not sure if any one else has encountered this problem. Also, this is my first post so forgive me if I do something incorrect. :-) I've created a cluster using 2 nodes and created a shared file system between them using gfs2. So far, the set up seemed to go well, and I can see the file system, and can write to it and copy files to it with no problem from either node. However, when I delete or remove files and directories from the gfs2 file system, the files and directories go away, but the file system does not reclaim the space from the deleted files. Is there a tunable parameter that handles this? Or did I miss something in the configuration? Has any one else encountered this situation? If I restart the cluster, the space comes back, but I don't want to have to restart the cluster every time I delete data in order to reclaim the space. 
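A generic first check when deleted files do not free up space is whether some process, possibly on another node of the cluster, still holds the unlinked files open, since the blocks cannot be reclaimed until the last holder closes them. A short sketch, with /mnt/gfs2 standing in for the actual mount point:

# block usage as reported by the filesystem
df -h /mnt/gfs2

# what the remaining files actually add up to
du -sh /mnt/gfs2

# open-but-deleted files still pinning space; run on every node in the cluster
lsof | grep '(deleted)'

If df and du agree and nothing turns up in lsof, the missing space is being held by the filesystem itself rather than by open handles.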
I'm running gfs2 on CentOS 5.1 X64. I did a google search but came up dry. Thanks! Ernie Ernst F. Graeler Systems Analyst/UnixDB Team Supervisor CommVault Customer Support Direct: 732.870.4059 Hotline: 877.780.3077 egraeler at commvault.com ******************Legal Disclaimer*************************** "This communication may contain confidential and privileged material for the sole use of the intended recipient. Any unauthorized review, use or distribution by others is strictly prohibited. If you have received the message in error, please advise the sender by reply email and delete the message. Thank you." **************************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2327 bytes Desc: image001.jpg URL: From swhiteho at redhat.com Tue Jul 1 13:53:41 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Tue, 01 Jul 2008 14:53:41 +0100 Subject: [Linux-cluster] GFS2 not releasing disk space? In-Reply-To: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com> References: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com> Message-ID: <1214920422.4011.82.camel@quoit> Hi, Thats an ancient version of GFS2, please use something more recent such as the current Fedora kernel, Steve. On Tue, 2008-07-01 at 09:51 -0400, Ernie Graeler wrote: > All, > > > > I?m new to this list, so I?m not sure if any one else has encountered > this problem. Also, this is my first post so forgive me if I do > something incorrect. J I?ve created a cluster using 2 nodes and > created a shared file system between them using gfs2. So far, the > set up seemed to go well, and I can see the file system, and can write > to it and copy files to it with no problem from either node. > However, when I delete or remove files and directories from the gfs2 > file system, the files and directories go away, but the file system > does not reclaim the space from the deleted files. Is there a tunable > parameter that handles this? Or did I miss something in the > configuration? Has any one else encountered this situation? If I > restart the cluster, the space comes back, but I don?t want to have to > restart the cluster every time I delete data in order to reclaim the > space. I?m running gfs2 on CentOS 5.1 X64. I did a google search but > came up dry. > > > > Thanks! > > Ernie > > > > > > > Ernst F. Graeler > > > Systems Analyst/UnixDB Team > Supervisor > > > CommVault Customer Support > > > Direct: 732.870.4059 > > > Hotline: 877.780.3077 > > > egraeler at commvault.com > > > > > > > > > > > > > > > > > > > ******************Legal Disclaimer*************************** > "This communication may contain confidential and privileged material > for the sole use of the intended recipient. Any unauthorized review, > use or distribution by others is strictly prohibited. If you have > received the message in error, please advise the sender by reply > email and delete the message. Thank you." 
> **************************************************************** > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From jruemker at redhat.com Tue Jul 1 14:42:51 2008 From: jruemker at redhat.com (John Ruemker) Date: Tue, 01 Jul 2008 10:42:51 -0400 Subject: [Linux-cluster] takeover, fencing & failback In-Reply-To: <486A0DE1.6010609@gmx.at> References: <486A0DE1.6010609@gmx.at> Message-ID: <486A426B.20807@redhat.com> Gerhard Spiegl wrote: > Hi all, > > I'm working on a two node cluster (RHEL 5.2 + RHCS) with one > XEN virtual machine per node: > > node1 => VM1 > node2 => VM2 > > When node1 takes over VM2 via the command: > > clusvcadm -M vm:VM2 -m node1 > > node2 gets fenced after takeover is done, which is probably expected behaviour. > This is not expected. The vm should migrate and both nodes should continue running. > As node2 comes up again it fetches his VM2 back (nofailback="0", but also > fences node1 (ipmilan) where VM1 is still running an therefore interrupted and > restartet on node2. > When node1 comes up the same game in the other direction begins. > Is there a way to avoid this fence loop? > > In other words: can a service be migrated from node1 to node2 without other > services that run on node1 being interrupted? > Are both nodes successfully joined in the cluster? What does 'cman_tool nodes' say? Can you attach logs showing all of this happening? John From l.dardini at comune.prato.it Tue Jul 1 17:13:47 2008 From: l.dardini at comune.prato.it (Leandro Dardini) Date: Tue, 1 Jul 2008 19:13:47 +0200 Subject: R: [Linux-cluster] Homebrew NAS Cluster References: Message-ID: <6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local> I am running a home-brew NAS Cluster for a medium sized ISP. It is run with a pair of Dell PowerEdge 2900 with 1 Terabyte of filesystem exported via NFS to 4 nodes running apache, exim and imap/pop3 services. Filesystem is made on top of drbd in a active/backup setup with heartbeat. Performance are good, but can be better with more memory on nfs node and faster disks. I don't know VMware very well, but I run other virtualization solutions, like QEMU. Do you plan to mount the NFS from inside the virtual machine or create a virtual disk on an exported NFS filesystem? Leandro -----Messaggio originale----- Da: linux-cluster-bounces at redhat.com per conto di Stephen Nelson-Smith Inviato: lun 30/06/2008 23.56 A: linux clustering Oggetto: [Linux-cluster] Homebrew NAS Cluster Hi all, I'm in the process of setting up a virtualisation farm which will have 50-60 virtual machines, running a wide range of web, application and database applications, all on top of vmware vi3. My budget won't stretch to a commercial NAS solution, so it's either a SAN, which could get complicated and hard to manage with so many nodes, or a home-brew NAS solution. Has anyone done this, on the list? I'm wondering what the catch is? I'm thinking all I need to do is run NFS on top of a clustered filesystem, and export to ESX. I could use some pointers, gotchas, ideas and experiences. Thanks! S. -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- A non-text attachment was scrubbed... 
Name: winmail.dat Type: application/ms-tnef Size: 3396 bytes Desc: not available URL: From bkyoung at gmail.com Tue Jul 1 18:16:59 2008 From: bkyoung at gmail.com (Brandon Young) Date: Tue, 1 Jul 2008 13:16:59 -0500 Subject: R: [Linux-cluster] Homebrew NAS Cluster In-Reply-To: <6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local> References: <6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local> Message-ID: <824ffea00807011116m69c61eb5l4d99318093900a30@mail.gmail.com> Yeah, similar question to the first responder ... Is your intent to have shared disk space between all the ESX servers? To support live migrations, etc? If so, then ESX server has a built-in filesystem called vmfs, which can be shared by all the servers in the farm to store VM images, etc. We use it at my place of employment. It's just SAN disk volumes shared by all the ESX servers. If you're looking for common storage to be shared and accessed among all the virtual machines, then an NFS farm might be what you're looking for; maybe it's unnecessary, though. I have a GFS storage cluster where four machines export the same data to user land. Actually, I have one server handling all the user space NFS needs (about 50 clients), and it isn't even breathing hard. I have two other NFS servers facing an HPC cluster with 300 client machines. I also have a Samba server serving out all this same data to user land, too, and it is underchallenged as well, with perhaps 100 clients. So, depending on how much traffic you would need to sustain, it may not even require a cluster of NFS servers to achieve your goals. If that's what you need, though, then a homebrwed NAS solution where the data is stored on a clustered filesystem is certainly an option worth considering. 2008/7/1 Leandro Dardini : > I am running a home-brew NAS Cluster for a medium sized ISP. It is run with > a pair of Dell PowerEdge 2900 with 1 Terabyte of filesystem exported via NFS > to 4 nodes running apache, exim and imap/pop3 services. Filesystem is made > on top of drbd in a active/backup setup with heartbeat. Performance are > good, but can be better with more memory on nfs node and faster disks. > > I don't know VMware very well, but I run other virtualization solutions, > like QEMU. Do you plan to mount the NFS from inside the virtual machine or > create a virtual disk on an exported NFS filesystem? > > Leandro > > > -----Messaggio originale----- > Da: linux-cluster-bounces at redhat.com per conto di Stephen Nelson-Smith > Inviato: lun 30/06/2008 23.56 > A: linux clustering > Oggetto: [Linux-cluster] Homebrew NAS Cluster > > Hi all, > > I'm in the process of setting up a virtualisation farm which will have > 50-60 virtual machines, running a wide range of web, application and > database applications, all on top of vmware vi3. > > My budget won't stretch to a commercial NAS solution, so it's either a > SAN, which could get complicated and hard to manage with so many > nodes, or a home-brew NAS solution. > > Has anyone done this, on the list? I'm wondering what the catch is? > I'm thinking all I need to do is run NFS on top of a clustered > filesystem, and export to ESX. > > I could use some pointers, gotchas, ideas and experiences. > > Thanks! > > S. 
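For the NFS-on-top-of-GFS approach being discussed, the export side is ordinary NFS configuration; a minimal sketch, in which the /gfs/shared path, the 192.168.10.0/24 client network and the fsid value are all placeholders:

# /etc/exports on a GFS node that will serve NFS
/gfs/shared  192.168.10.0/24(rw,sync,no_subtree_check,no_root_squash,fsid=10)

# apply and verify the export table
exportfs -ra
exportfs -v

Pinning an explicit fsid is generally recommended when the same clustered filesystem may be exported from more than one server, so clients see a consistent file handle, and ESX normally needs root access to an NFS datastore, hence no_root_squash; both options should be checked against the versions actually deployed.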
> > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew at ntsg.umt.edu Tue Jul 1 18:35:39 2008 From: andrew at ntsg.umt.edu (Andrew A. Neuschwander) Date: Tue, 01 Jul 2008 12:35:39 -0600 Subject: [Linux-cluster] Homebrew NAS Cluster In-Reply-To: References: Message-ID: <486A78FB.9080100@ntsg.umt.edu> My setup sounds similar to yours but with a SAN for all the underlying storage. I have a large FC SAN (might be cost prohibitive for you), and three physical (Dell PE1500s) servers. Two of them are running ESX 3.5 and one is running CentOS. The ESX Servers share a chunk of SAN using VMFS3. The rest of the san is shared by all three physical servers. I have a handful of virtual CentOS servers to which I've installed the shared SAN luns via raw device mapping (with the scsi bus' set in physical sharing mode). I then put the physical and virtual CentOS machines in one GFS cluster to share the san (using a custom fence script). While this all works and is in production, the performance isn't what I'd like. Locking calls by the virtual centos machines really slow things down, especially when running samba on a vm. I think it's the nature of GFS being exacerbated by all the abstraction of ESX. It takes quite a bit of tuning. The biggest caveat for ESX users is that putting a virtual machine's scsi bus in physical shared-bus mode, disables DRS and VMotion. You can't live migrate these machines. The HA feature still works well though. -A -- Andrew A. Neuschwander, RHCE Linux Systems/Software Engineer College of Forestry and Conservation The University of Montana http://www.ntsg.umt.edu andrew at ntsg.umt.edu - 406.243.6310 Stephen Nelson-Smith wrote: > Hi all, > > I'm in the process of setting up a virtualisation farm which will have > 50-60 virtual machines, running a wide range of web, application and > database applications, all on top of vmware vi3. > > My budget won't stretch to a commercial NAS solution, so it's either a > SAN, which could get complicated and hard to manage with so many > nodes, or a home-brew NAS solution. > > Has anyone done this, on the list? I'm wondering what the catch is? > I'm thinking all I need to do is run NFS on top of a clustered > filesystem, and export to ESX. > > I could use some pointers, gotchas, ideas and experiences. > > Thanks! > > S. > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > From jerlyon at gmail.com Tue Jul 1 18:57:48 2008 From: jerlyon at gmail.com (Jeremy Lyon) Date: Tue, 1 Jul 2008 12:57:48 -0600 Subject: [Linux-cluster] IP resource behavior Message-ID: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com> Hi, We noticed today that if we manually remove an IP via ip a del /32 dev bond0 that the service does not detect this and does not cause a fail over. Shouldn't the service be statusing the IP resource to make sure it is configured and up? We do have the monitor link option enabled. This is cluster 2 on RHEL 5.1 TIA Jeremy -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From theophanis_kontogiannis at yahoo.gr Wed Jul 2 06:39:44 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Wed, 2 Jul 2008 09:39:44 +0300 Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9 In-Reply-To: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com> References: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com> Message-ID: <001c01c8dc0e$6baaaee0$43000ca0$@gr> Hello, Just a tip. Though obviously I do not know your exact FireWire setup, I ended up with Centos 5 and kernel 2.6.18-92.1.6.el5.centos.plus were firewire works perfectly especially for TCP/IP over Ether over Firewire. Sincerely, T.K. From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Tina Soles Sent: Tuesday, July 01, 2008 1:32 AM To: linux-cluster at redhat.com Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9 Hello, I am attempting to setup an Oracle RAC using these instructions: http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi_2.html#1 7 I am running Fedora 9 with kernel = 2.6.25-14.fc9.i686 I realize this is probably an "unsupported" version, but it's the only version that I could get to work with my firewire setup, so I cannot change the kernel. ocfs2 is up and running, and now I need to install ASMLib 2.0, but it appears that there is no rpm distribution for this kernel. Therefore, I am attempting to build my own, from the source files, oracleasm-2.0.4.tar.gz. After unzipping and untarring, I run ./configure and it seems to run fine (see below), but when I try to run make install it bombs with an error no rule to make target `oracleasm.ko', needed by `install-oracleasm'. Stop. I don't have any experience building rpms from source, so any explicit instructions you can give me would be much appreciated. Also, does this source file contain everything I need in order to build the kernel driver, userspace library, and driver support files, or do I need separate source files for those? Please forgive my ignorance, as I am new to this. Thanks in advance for any help you can give me. Regards, Tina # ./configure checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking how to run the C preprocessor... gcc -E checking for a BSD-compatible install... /usr/bin/install -c checking whether ln -s works... yes checking for ranlib... ranlib checking for ar... /usr/bin/ar checking for egrep... grep -E checking for ANSI C header files... yes checking for an ANSI C-conforming const... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for unsigned long... yes checking size of unsigned long... 4 checking for vendor... not found checking for vendor kernel... not supported checking for directory with kernel build tree... /lib/modules/2.6.25-14.fc9.i686/build checking for kernel version... 
2.6.25-14.fc9.i686 checking for capabilities mask in backing_dev_info... yes checking for vfsmount in ->get_sb() helpers... yes checking for for mutex API... yes checking for for i_private... yes checking for for i_blksize... no configure: creating ./config.status config.status: creating Config.make config.status: creating include/linux/oracleasm/module_version.h config.status: creating vendor/sles9/oracleasm.spec-generic config.status: creating vendor/rhel4/oracleasm.spec-generic config.status: creating vendor/fc6/oracleasm.spec-generic config.status: creating vendor/sles10/oracleasm.spec-generic config.status: creating vendor/rhel5/oracleasm.spec-generic config.status: creating vendor/common/oracleasm-headers.spec-generic # make install make -C include install make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include' make -C linux install make[2]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include/linux' make -C oracleasm install make[3]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm' /bin/sh ../../../mkinstalldirs /usr/local/include/linux/oracleasm for hdr in abi.h abi_compat.h disk.h error.h manager.h manager_compat.h kernel.h compat32.h module_version.h; do \ /usr/bin/install -c -m 644 $hdr /usr/local/include/linux/oracleasm/$hdr; \ done make[3]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm' make[2]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include/linux' make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include' make -C kernel install make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/kernel' make[1]: *** No rule to make target `oracleasm.ko', needed by `install-oracleasm'. Stop. make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/kernel' make: *** [kernel-install] Error 2 Tina Soles Senior Analyst STR Software 11505 Allecingie Parkway Richmond, VA 23235 email. tina.soles at strsoftware.com phone. 804.897.1600 fax. 804.897.1638 web. www.strsoftware.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 3308 bytes Desc: not available URL: From theophanis_kontogiannis at yahoo.gr Wed Jul 2 10:20:43 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Wed, 2 Jul 2008 13:20:43 +0300 Subject: [Linux-cluster] Problem with GFS2 - Kernel Panic - Can NOT erase directory In-Reply-To: <008201c8dac0$e0aae010$a200a030$@gr> References: <008201c8dac0$e0aae010$a200a030$@gr> Message-ID: <003401c8dc2d$4a7c0560$df741020$@gr> Hello again, Becoming queries why only once service fails, I tried to encircle the root cause. I ended up that files in only one directory (were the failing service keeps its files), are corrupted. Trying to ls -l in the directory gives the following output: ls: reading directory .: Input/output error total 192 ?--------- ? ? ? ? ? account_boinc.bakerlab.org_rosetta.xml ?--------- ? ? ? ? ? account_climateprediction.net.xml ?--------- ? ? ? ? ? account_predictor.chem.lsa.umich.edu.xml ?--------- ? ? ? ? ? all_projects_list.xml -rw-r--r-- 1 boinc boinc 159796 Jun 22 22:47 client_state_prev.xml ?--------- ? ? ? ? ? client_state.xml -rw-r--r-- 1 boinc boinc 5141 Jun 13 23:21 get_current_version.xml ?--------- ? ? ? ? ? get_project_config.xml -rw-r--r-- 1 boinc boinc 899 Apr 4 17:06 global_prefs.xml ?--------- ? ? ? ? ? gui_rpc_auth.cfg ?--------- ? ? ? ? ? 
job_log_boinc.bakerlab.org_rosetta.txt ?--------- ? ? ? ? ? job_log_predictor.chem.lsa.umich.edu.txt ?--------- ? ? ? ? ? lockfile ?--------- ? ? ? ? ? lookup_account.xml ?--------- ? ? ? ? ? lookup_website.html ?--------- ? ? ? ? ? master_boinc.bakerlab.org_rosetta.xml ?--------- ? ? ? ? ? master_climateprediction.net.xml ?--------- ? ? ? ? ? master_predictor.chem.lsa.umich.edu.xml ?--------- ? ? ? ? ? projects ?--------- ? ? ? ? ? sched_reply_boinc.bakerlab.org_rosetta.xml ?--------- ? ? ? ? ? sched_reply_climateprediction.net.xml ?--------- ? ? ? ? ? sched_reply_predictor.chem.lsa.umich.edu.xml ?--------- ? ? ? ? ? sched_request_boinc.bakerlab.org_rosetta.xml -rw-r--r-- 1 boinc boinc 6766 Jun 22 21:27 sched_request_climateprediction.net.xml ?--------- ? ? ? ? ? sched_request_predictor.chem.lsa.umich.edu.xml ?--------- ? ? ? ? ? slots ?--------- ? ? ? ? ? statistics_boinc.bakerlab.org_rosetta.xml ?--------- ? ? ? ? ? statistics_climateprediction.net.xml ?--------- ? ? ? ? ? statistics_predictor.chem.lsa.umich.edu.xml ?--------- ? ? ? ? ? stderrdae.txt ?--------- ? ? ? ? ? stdoutdae.txt ?--------- ? ? ? ? ? time_stats_log At the same moment the kernel reports what is following below (attached the previous e-mail). Trying to rm -rf the directory fails with the same kernel message. Any ideas on how to erase the problematic directory? Also the other node (the one on which I do not try to make any actions on the file system in question, gives the following message: GFS2: fsid=tweety:gfs2-00.0: jid=1: Trying to acquire journal lock... GFS2: fsid=tweety:gfs2-00.0: jid=1: Busy And the file system becomes inaccessible forever. Any one knows why is that? Thank you all for your time T. Kontogiannis From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Theophanis Kontogiannis Sent: Monday, June 30, 2008 5:52 PM To: 'linux clustering' Subject: [Linux-cluster] Problem with GFS2 - Kernel Panic Hello all, I have a two node cluster with DRBD running in Primary/Primary. Both nodes are running: ? Kernel 2.6.18-92.1.6.el5.centos.plus ? GFS2 fsck 0.1.44 ? cman_tool 2.0.84 ? Cluster LVM daemon version: 2.02.32-RHEL5 (2008-03-04) Protocol version: 0.2.1 ? 
DRBD Version: 8.2.6 (api:88) After a corruption (which was the result of combining updating and rebooting with the FS mounted, network interruption during the reboot and like issues, I keep on getting the following on one node: Jun 30 00:13:40 tweety1 clurgmgrd[5283]: stop on script "BOINC" returned 1 (generic error) Jun 30 00:13:40 tweety1 clurgmgrd[5283]: Services Initialized Jun 30 00:13:40 tweety1 clurgmgrd[5283]: State change: Local UP Jun 30 00:13:45 tweety1 clurgmgrd[5283]: Starting stopped service service:BOINC-t1 Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: fatal: invalid metadata block Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: bh = 21879736 (magic number) Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: function = gfs2_meta_indirect_buffer, file = fs/gfs2/meta_io.c, line = 332 Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: about to withdraw this file system Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: telling LM to withdraw Jun 30 00:13:46 tweety1 clurgmgrd[5283]: Service service:BOINC-t1 started Jun 30 00:13:46 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: withdrawn Jun 30 00:13:46 tweety1 kernel: Jun 30 00:13:46 tweety1 kernel: Call Trace: Jun 30 00:13:46 tweety1 kernel: [] :gfs2:gfs2_lm_withdraw+0xc1/0xd0 Jun 30 00:13:46 tweety1 kernel: [] __wait_on_bit+0x60/0x6e Jun 30 00:13:46 tweety1 kernel: [] sync_buffer+0x0/0x3f Jun 30 00:13:46 tweety1 kernel: [] out_of_line_wait_on_bit+0x6c/0x78 Jun 30 00:13:46 tweety1 kernel: [] wake_bit_function+0x0/0x23 Jun 30 00:13:46 tweety1 kernel: [] :gfs2:gfs2_meta_check_ii+0x2c/0x38 Jun 30 00:13:46 tweety1 kernel: [] :gfs2:gfs2_meta_indirect_buffer+0x104/0x15e Jun 30 00:13:46 tweety1 kernel: [] :gfs2:gfs2_inode_refresh+0x22/0x2ca Jun 30 00:13:46 tweety1 kernel: [] wake_bit_function+0x0/0x23 Jun 30 00:13:46 tweety1 kernel: [] :gfs2:inode_go_lock+0x29/0x57 Jun 30 00:13:47 tweety1 kernel: [] :gfs2:glock_wait_internal+0x1d4/0x23f Jun 30 00:13:47 tweety1 kernel: [] :gfs2:gfs2_glock_nq+0x1ae/0x1d4 Jun 30 00:13:47 tweety1 kernel: [] :gfs2:gfs2_lookup+0x58/0xa7 Jun 30 00:13:47 tweety1 kernel: [] :gfs2:gfs2_lookup+0x50/0xa7 Jun 30 00:13:47 tweety1 kernel: [] d_alloc+0x174/0x1a9 Jun 30 00:13:47 tweety1 kernel: [] do_lookup+0xd3/0x1d4 Jun 30 00:13:47 tweety1 kernel: [] __link_path_walk+0xa01/0xf42 Jun 30 00:13:47 tweety1 kernel: [] :gfs2:compare_dents+0x0/0x57 Jun 30 00:13:47 tweety1 kernel: [] link_path_walk+0x5c/0xe5 Jun 30 00:13:47 tweety1 kernel: [] :gfs2:gfs2_glock_put+0x26/0x133 After that, the machine freezes completely. The only way to recover is to power-cycle / reset. "gfs2-fsck -vy /dev/mapper/vg0-data0" ends (not terminates, it just look like it finishes) with: Pass5 complete Writing changes to disk gfs2_fsck: buffer still held for block: 21875415 (0x14dcad7) After remounting the file system and having a service start (that has its files on this gfs2 filesystem), the kernel again crasses with the same message and the node freezes up. Unfortunately due to bad handling, I failed to DRBD invalidate the problematic node, and instead of making it sync target (which theoretically would solve the problem, since the good node, would sync the bad node). Instead I made the bad node, sync source and now both nodes have the same issue L Any ideas of how can I resolve this issue? Sincerely, Theophanis Kontogiannis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gspiegl at gmx.at Wed Jul 2 13:51:05 2008 From: gspiegl at gmx.at (Gerhard Spiegl) Date: Wed, 02 Jul 2008 15:51:05 +0200 Subject: [Linux-cluster] takeover, fencing & failback In-Reply-To: <486A426B.20807@redhat.com> References: <486A0DE1.6010609@gmx.at> <486A426B.20807@redhat.com> Message-ID: <486B87C9.8040909@gmx.at> John Ruemker wrote: > Gerhard Spiegl wrote: >> Hi all, >> >> I'm working on a two node cluster (RHEL 5.2 + RHCS) with one >> XEN virtual machine per node: >> >> node1 => VM1 >> node2 => VM2 >> >> When node1 takes over VM2 via the command: >> >> clusvcadm -M vm:VM2 -m node1 >> >> node2 gets fenced after takeover is done, which is probably expected >> behaviour. >> > > This is not expected. The vm should migrate and both nodes should > continue running. > >> As node2 comes up again it fetches his VM2 back (nofailback="0", but also >> fences node1 (ipmilan) where VM1 is still running an therefore >> interrupted and restartet on node2. >> When node1 comes up the same game in the other direction begins. >> Is there a way to avoid this fence loop? >> >> In other words: can a service be migrated from node1 to node2 without >> other >> services that run on node1 being interrupted? >> > > Are both nodes successfully joined in the cluster? What does 'cman_tool > nodes' say? Can you attach logs showing all of this happening? > Hi, cman_tool node before an during the migration: [root at ols011p ~]# cman_tool nodes Node Sts Inc Joined Name 0 M 0 2008-07-02 12:50:31 /dev/mapper/HDS-00F9p2 1 M 1228 2008-07-02 12:50:19 ols011p.ops.ctbto.org 2 M 1232 2008-07-02 12:50:19 ols012p.ops.ctbto.org [root at ols012p ~]# cman_tool nodes Node Sts Inc Joined Name 0 M 0 2008-07-02 12:50:32 /dev/mapper/HDS-00F9p2 1 M 1232 2008-07-02 12:50:19 ols011p.ops.ctbto.org 2 M 1224 2008-07-02 12:49:51 ols012p.ops.ctbto.org everything seems fine. The logs are attached in seperate files. thanks Gerhard > > John > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: OLS011_log URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: OLS012_log URL: From jbrassow at redhat.com Wed Jul 2 14:51:43 2008 From: jbrassow at redhat.com (Jonathan Brassow) Date: Wed, 2 Jul 2008 09:51:43 -0500 Subject: [Linux-cluster] Cluster doesn't come up while rebooting In-Reply-To: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com> References: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com> Message-ID: I wouldn't worry about the "Magma Event: Membership Change" messages. I think that get printed out whenever a machine joins or leaves the cluster. (You have to be part of the cluster to see the changes... which is why everyone sees local change first, followed by whoever comes after them.) Do you have syslog set to print out 'debug'? That may explain some of these messages... Just to get this straight, after all machines are up, if you use 'clusvcadm' to start the services, it works? If you reboot all machines, it doesn't work on bootup? What if you just reboot one machine? Someone will have to confirm my next few statements, but this is what I think is happening... rgmanager does a 'stop' when a machine comes up. I'm guessing this is why you are seeing the "is not mounted" and other messages. 
In your cluster.conf, you have the services set to 'autostart="0"', which means they will not start by default(?). So, you need to start by hand when the machines come up. Potential solution is to ignore the messages you've attached (or figure out why syslog is being so verbose), and take out the 'autostart="0"' from cluster.conf. brassow On Jul 1, 2008, at 4:06 AM, Stevan Colaco wrote: > Hello All, > > I need your help for one issue i am facing . > > OS: RHEL4 ES Update 6 64bit > > I have a deployment where we have 2 + 1 cluster (2 active and one > passive). I have a service which is to be failed over but faced issues > when i rebooted all 3 servers. Services got disabled. But when i use > clusvsadm to manually enable service it works. Here are the logs : - > > Jun 25 11:13:15 mb1 clurgmgrd[14825]: Resource Group Manager Starting > Jun 25 11:13:15 mb1 clurgmgrd[14825]: Loading Service Data > Jun 25 11:13:17 mb1 clurgmgrd[14825]: Initializing Services > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: /dev/sdh1 is not mounted > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-BACKUP with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-BACKUP returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-STORE with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-STORE returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-DBDATA with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-DBDATA returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-CONF with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CONF returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-REDOLOG with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-REDOLOG returned > 2 (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-INDEX with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-INDEX returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-LOG with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-LOG returned 2 > (invalid argument(s)) > Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match > LABEL=MB2-ZIMBRA-CLUST with a real device > Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CLUSTER returned > 2 (invalid argument(s)) > Jun 25 11:13:22 mb1 clurgmgrd: [14825]: /dev/sdg1 is not mounted > Jun 25 11:13:27 mb1 clurgmgrd: [14825]: /dev/sdf1 is not mounted > Jun 25 11:13:33 mb1 clurgmgrd: [14825]: /dev/sde1 is not mounted > Jun 25 11:13:38 mb1 clurgmgrd: [14825]: /dev/sdd1 is not mounted > Jun 25 11:13:43 mb1 clurgmgrd: [14825]: /dev/sdc1 is not mounted > Jun 25 11:13:45 mb1 rgmanager: clurgmgrd startup failed > Jun 25 11:13:48 mb1 clurgmgrd: [14825]: /dev/sdb1 is not mounted > Jun 25 11:13:53 mb1 clurgmgrd: [14825]: /dev/sda1 is not mounted > Jun 25 11:13:58 mb1 clurgmgrd[14825]: Services Initialized > Jun 25 11:14:01 mb1 clurgmgrd[14825]: Logged in SG "usrm::manager" > Jun 25 11:14:01 mb1 clurgmgrd[14825]: Magma Event: Membership Change > Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: Local UP > Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: > mbstandby.ku.edu.kw UP > Jun 25 11:14:03 mb1 clurgmgrd[14825]: Magma Event: Membership 
Change > Jun 25 11:14:03 mb1 clurgmgrd[14825]: State change: mb2.ku.edu.kw UP > > > MB2 server Logs > > Jun 25 11:13:40 mb2 clurgmgrd[14776]: Resource Group Manager Starting > Jun 25 11:13:40 mb2 clurgmgrd[14776]: Loading Service Data > Jun 25 11:13:41 mb2 clurgmgrd[14776]: Initializing Services > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-DBDATA with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-DBDATA returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-INDEX with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-INDEX returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-LOG with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-LOG returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-CONF with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CONF returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: /dev/sdh1 is not mounted > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-BACKUP with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-BACKUP returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-REDOLOG with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-REDOLOG returned > 2 (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-STORE with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-STORE returned 2 > (invalid argument(s)) > Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match > LABEL=MB1-ZIMBRA-CLUST with a real device > Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CLUSTER returned > 2 (invalid argument(s)) > Jun 25 11:13:46 mb2 clurgmgrd: [14776]: /dev/sdf1 is not mounted > Jun 25 11:13:52 mb2 clurgmgrd: [14776]: /dev/sdg1 is not mounted > Jun 25 11:13:57 mb2 clurgmgrd: [14776]: /dev/sde1 is not mounted > Jun 25 11:14:02 mb2 clurgmgrd: [14776]: /dev/sdd1 is not mounted > Jun 25 11:14:07 mb2 clurgmgrd: [14776]: /dev/sdc1 is not mounted > Jun 25 11:14:10 mb2 rgmanager: clurgmgrd startup failed > Jun 25 11:14:12 mb2 clurgmgrd: [14776]: /dev/sdb1 is not mounted > Jun 25 11:14:18 mb2 clurgmgrd: [14776]: /dev/sda1 is not mounted > Jun 25 11:14:23 mb2 clurgmgrd[14776]: Services Initialized > Jun 25 11:14:25 mb2 clurgmgrd[14776]: Logged in SG "usrm::manager" > Jun 25 11:14:25 mb2 clurgmgrd[14776]: Magma Event: Membership Change > Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: Local UP > Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mb1.ku.edu.kw UP > Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: > mbstandby.ku.edu.kw UP > > MBSTANDBY LOGS > > Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Resource Group Manager > Starting > Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Loading Service Data > Jun 25 11:13:27 mbstandby clurgmgrd[15850]: Initializing Services > Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdl1 is not mounted > Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdp1 is not mounted > Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdk1 is not mounted > Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdn1 is not mounted > Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdj1 is not mounted > Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdo1 is 
not mounted > Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdi1 is not mounted > Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdm1 is not mounted > Jun 25 11:13:47 mbstandby sshd(pam_unix)[17583]: session opened for > user root by (uid=0) > Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdd1 is not mounted > Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdh1 is not mounted > Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdg1 is not mounted > Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdc1 is not mounted > Jun 25 11:13:56 mbstandby rgmanager: clurgmgrd startup failed > Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session opened for user > zimbra by (uid=0) > Jun 25 11:13:56 mbstandby zimbra: -bash: /opt/zimbra/log/startup.log: > No such file or directory > Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session closed for > user zimbra > Jun 25 11:13:56 mbstandby rc: Starting zimbra: failed > Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdf1 is not mounted > Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdb1 is not mounted > Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sde1 is not mounted > Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sda1 is not mounted > Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Services Initialized > Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Logged in SG > "usrm::manager" > Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Magma Event: Membership > Change > Jun 25 11:14:09 mbstandby clurgmgrd[15850]: State change: Local UP > Jun 25 11:14:12 mbstandby clurgmgrd[15850]: Magma Event: Membership > Change > Jun 25 11:14:12 mbstandby clurgmgrd[15850]: State change: > mb1.ku.edu.kw UP > Jun 25 11:14:13 mbstandby clurgmgrd[15850]: Resource groups locked; > not evaluating > Jun 25 11:14:14 mbstandby clurgmgrd[15850]: Magma Event: Membership > Change > Jun 25 11:14:14 mbstandby clurgmgrd[15850]: State change: > mb2.ku.edu.kw UP > Jun 25 11:49:22 mbstandby sshd(pam_unix)[9438]: session opened for > user root by (uid=0) > > I am using e2label to mount on failover as well as primary server. > Attached also is my cluster.conf. > > Right now fencing is not being used properly just using manual and was > doing tetsing with HP ILO fencing. > > !st query i have is why does it show "Magma Event: Membership > Change" ? > > Since i have initially defined 3 members in cluster , it should not > give me this . Is it because of some package missing or i have to run > up2date ? > > I have installed following packages : - > > ccs-1.0.11-1.x86_64.rpm > cman-kernheaders-2.6.9-53.5.x86_64.rpm gulm-1.0.10-0.x86_64.rpm > magma-plugins-1.0.12-0.x86_64.rpm > ccs-devel-1.0.11-1.x86_64.rpm dlm-1.0.7-1.x86_64.rpm > gulm-devel-1.0.10-0.x86_64.rpm > perl-Net-Telnet-3.03-3.noarch.rpm > cman-1.0.17-0.x86_64.rpm dlm-devel-1.0.7-1.x86_64.rpm > iddev-2.0.0-4.x86_64.rpm rgmanager-1.9.72-1.x86_64.rpm > cman-devel-1.0.17-0.x86_64.rpm > dlm-kernel-2.6.9-52.2.x86_64.rpm iddev-devel-2.0.0-4.x86_64.rpm > system-config-cluster-1.0.51-2.0.noarch.rpm > cman-kernel-2.6.9-53.5.x86_64.rpm > dlm-kernel-smp-2.6.9-52.2.x86_64.rpm luci-0.11.0-3.x86_64.rpm > cman-kernel-smp-2.6.9-53.5.x86_64.rpm fence-1.32.50-2.x86_64.rpm > magma-1.0.8-1.x86_64.rpm > > Should i be missing any other important package for cluster ? I > installed packages using rpm -ivh *.rpm . > Also i stopped lock_glumd service as i am using lock_dlm lock manager. > > Later i tried using just IP in service part w/o mount points and > application service. 
Then also on reboot it doesn't start up. Here are > the logs :- > > Jun 27 19:44:37 mb1 clurgmgrd[12737]: Resource Group > Manager Starting > Jun 27 19:44:37 mb1 clurgmgrd[12737]: Loading Service Data > Jun 27 19:44:37 mb1 fstab-sync[12738]: removed all generated mount > points > Jun 27 19:44:38 mb1 clurgmgrd[12737]: Initializing Services > Jun 27 19:44:38 mb1 clurgmgrd[12737]: Services Initialized > Jun 27 19:44:38 mb1 clurgmgrd[12737]: Logged in SG > "usrm::manager" > Jun 27 19:44:38 mb1 clurgmgrd[12737]: Magma Event: Membership > Change > Jun 27 19:44:38 mb1 clurgmgrd[12737]: State change: Local UP > Jun 27 19:44:38 mb1 rgmanager: clurgmgrd startup succeeded > Jun 27 19:44:41 mb1 clurgmgrd[12737]: Magma Event: Membership > Change > Jun 27 19:44:41 mb1 clurgmgrd[12737]: State change: > mbstandby.ku.edu.kw UP > Jun 27 19:44:43 mb1 clurgmgrd[12737]: Magma Event: Membership > Change > Jun 27 19:44:43 mb1 clurgmgrd[12737]: State change: > mb2.ku.edu.kw UP > > Attached is also cluster.conf for this > > Please guide what could be the issue. Thanks in advance. > > Regards, > -Steven > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster
From lhh at redhat.com Wed Jul 2 18:29:08 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:29:08 -0400 Subject: [Linux-cluster] IP resource behavior In-Reply-To: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com> References: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com> Message-ID: <1215023348.23062.6.camel@localhost.localdomain> On Tue, 2008-07-01 at 12:57 -0600, Jeremy Lyon wrote: > Hi, > > We noticed today that if we manually remove an IP via ip a del /32 > dev bond0 that the service does not detect this and does not cause a > fail over. Shouldn't the service be statusing the IP resource to make > sure it is configured and up? We do have the monitor link option > enabled. This is cluster 2 on RHEL 5.1 Yes, it should have detected it. However, there's a bug in the stable2 branch which could cause it to fail in your case, particularly if your IP ends in say .25 -- Lon
From lhh at redhat.com Wed Jul 2 18:34:32 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:34:32 -0400 Subject: [Linux-cluster] takeover, fencing & failback In-Reply-To: <486A0DE1.6010609@gmx.at> References: <486A0DE1.6010609@gmx.at> Message-ID: <1215023672.23062.10.camel@localhost.localdomain> On Tue, 2008-07-01 at 12:58 +0200, Gerhard Spiegl wrote: > Hi all, > > I'm working on a two node cluster (RHEL 5.2 + RHCS) with one > XEN virtual machine per node: > > node1 => VM1 > node2 => VM2 > > When node1 takes over VM2 via the command: > > clusvcadm -M vm:VM2 -m node1 > > node2 gets fenced after takeover is done, which is probably expected behaviour. No, it's not. > As node2 comes up again it fetches his VM2 back (nofailback="0"), but also > fences node1 (ipmilan) where VM1 is still running and therefore interrupted and > restarted on node2. Neither is this. Fetching the VM back certainly should require fencing... > When node1 comes up the same game in the other direction begins. > Is there a way to avoid this fence loop? > In other words: can a service be migrated from node1 to node2 without other > services that run on node1 being interrupted? 
We'll need more details in order to figure out what's going on; such as cluster.conf and your network topology (switch make/model, what speed are your network links, etc) -- Lon From lhh at redhat.com Wed Jul 2 18:36:28 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:36:28 -0400 Subject: [Linux-cluster] CS5 / IP failover with bond interface ? In-Reply-To: <486354AB.4050307@bull.net> References: <486354AB.4050307@bull.net> Message-ID: <1215023788.23062.13.camel@localhost.localdomain> On Thu, 2008-06-26 at 10:34 +0200, Alain Moulle wrote: > Hi > > Is it supported to use IP bonded adress as IP to > be failovered via the CS5 ? It should be, but you must have a bonded address configured first - we do not manage setting up/taking down bonded interfaces. Rgmanager should assign IP addresses to "bondX" when appropriate. -- Lon From lhh at redhat.com Wed Jul 2 18:40:50 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:40:50 -0400 Subject: [Linux-cluster] Re: CS5 / quorum disk and heuristics / about allow_kill and/or reboot In-Reply-To: <4863559E.9030200@bull.net> References: <4863559E.9030200@bull.net> Message-ID: <1215024050.23062.17.camel@localhost.localdomain> On Thu, 2008-06-26 at 10:38 +0200, Alain Moulle wrote: > Hi Lon > > and so ... ? ;-) Right. Heartbeat fails + allow_kill = 0 -> qdiskd doesn't help prevent fence race. reboot = 0 shouldn't matter because the the node which has a correct heuristic score will win. -- Lon > > Regards > Alain Moull? > > > Date: Tue, 10 Jun 2008 14:37:19 -0400 > From: Lon Hohberger > >>Hi Lon, > >>> Whereas heart-beat interface was working fine. > >>> You can disable these by setting allow_kill="0" and/or reboot="0" > >>> (see qdisk(5)). > >> > >> > >> => ok but in the case of a heart-beat failure, it will no more > >> avoid the dual-fencing in a two-nodes cluster if allow_kill="0" and/or > reboot="0" , right ? > > >I'd have to think about it. > >Lon > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From lhh at redhat.com Wed Jul 2 18:42:12 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:42:12 -0400 Subject: [Linux-cluster] virtual machine failover with gfs In-Reply-To: References: Message-ID: <1215024132.23062.20.camel@localhost.localdomain> On Wed, 2008-06-25 at 15:51 -0700, matt whiteley wrote: > I have spent lots of hours trying different setups and reading the > documentation already so I hope this isn't a faq as I am new to the > list. > > I read the Red Hat Magazine article on this topic[1], but have come to > realize that it might not be exactly what I am going for. I want to > have a group of nodes that run a group of virtual machines with > automated failover. I set things up how the article described but > realized I didn't want the gfs mount in the fstab file. I would like > the gfs mount described in the cluster.conf file so that as nodes are > added or removed the mount will follow the changes (I know about the 1 > journal per node so have created a few extra already). When I add a > service to mount the gfs resource, it only gets mounted on one node as > is to be expected thinking in terms of other resources. > I started thinking about this and it almost seems like gfs is > unnecessary. Should I have a file system per virtual machine that > wouldn't need to be gfs since only one node will ever run a virtual > machine at a time? 
Then mount/umount the file system as the virtual > machine was migrated in the cluster? If you assign a raw SAN Lun to each virtual machine, you don't need GFS. I would not bother making an EXT3 or other local file system and placing a single VM image on it; it's not terribly practical. -- Lon > > It seems like I am missing something about how this should be setup > and I would really appreciate any tips or ideas. I will include my > cluster.conf in case it provides any more info. > > As a side note, what is with all the errors from system-config- > kickstart telling me my config file is invalid if it was generated by > conga. Both versions are updated to the newest available. > > > > [1] http://www.redhatmagazine.com/2007/08/23/automated-failover-and-recovery-of-virtualized-guests-in-advanced-platform/ > > thanks, > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From lhh at redhat.com Wed Jul 2 18:43:47 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:43:47 -0400 Subject: [Linux-cluster] Lost token - every 5 minutes: [TOTEM] The token was lost. Samba process possible cause? In-Reply-To: <6008E5CED89FD44A86D3C376519E1DB2010347BB42@megatron.ms.a2end.com> References: <6008E5CED89FD44A86D3C376519E1DB2010347BB42@megatron.ms.a2end.com> Message-ID: <1215024227.23062.23.camel@localhost.localdomain> Hi, This sounds like something that someone on the openais would know. I've CC'd the openais list. -- Lon On Fri, 2008-06-27 at 16:03 +1000, Bevan Broun wrote: > Hi All > > I have a 2 node RHEL-5.1 cluster. A quorum disk is configured. > The hosts have 4 NICs. These are bonded: > (eth0+eth2) -> bond0 > (eth1+eth3) -> bond1 > Unfortunately I was not able to use a dedicated interface for cluster communications - bond1 is being used. This is where I think Im in trouble. > > The cluster has been configured using IP addressess. I did have to use http://archives.free.net.ph/message/20080130.074958.5c7a211c.en.html > as the hostname is related to the bond0 IP. > > I have not defined the interface to be used by the cluster, just relying on the IP address configured. > The cluster's purpose is 2 GFS file systems. > > The cluster was configured and working for 4 days before there was problems. > > I now have almost constant lost of token message in /var/log/message. They are almost exactly 5 minutes apart. A typical bit of messages file is show below my sig. > > Just before the problem started a samba message shows nmdb becomming local master browser for a work group on the interface used for cluster communications. > > Jun 20 13:39:27 HOST1 nmbd[24506]: [2008/06/20 13:39:27, 0] nmbd/nmbd_become_lmb.c:become_loca > l_master_stage2(396) > Jun 20 13:39:27 HOST1 nmbd[24506]: ***** > Jun 20 13:39:27 HOST1 nmbd[24506]: > Jun 20 13:39:27 HOST1 nmbd[24506]: Samba name server NBM1 is now a local master browser for > workgroup SMS_DOMAIN on subnet 162.16.96.229 > Jun 20 13:39:27 HOST1 nmbd[24506]: > Jun 20 13:39:27 HOST1 nmbd[24506]: ***** > Jun 20 13:43:27 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state. > > "cman_tool status" shows both nodes and looks normal. Looks like clmvd is not happy, df commands are hanging. > > Could nmdb be causing this token loss? Any ideas on how to proceed? > > (names and IPs have been changed). 
> > Thanks > > Bevan Broun > Solutions Architect > Ardec International > http://www.ardec.com.au > http://www.lisasoft.com > http://www.terrapages.com > Sydney > ----------------------- > Suite 112,The Lower Deck > 19-21 Jones Bay Wharf > Pirrama Road, Pyrmont 2009 > Ph: +61 2 8570 5000 > Fax: +61 2 8570 5099 > > > > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state. > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Receive multicast socket recv buffer size (28800 > 0 bytes). > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Transmit multicast socket send buffer size (2621 > 42 bytes). > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] entering GATHER state from 2. > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep. > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Saving state aru 16 high seq received 16 > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce34 > Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] entering COMMIT state. > Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state. > Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4. > Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep. > Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce38 > Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] entering COMMIT state. > Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state. > Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4. > Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep. > Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce3c > Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] entering COMMIT state. > Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state. > Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4. > Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep. > Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce40 > Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] entering COMMIT state. > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] entering RECOVERY state. > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] position [0] member 162.16.96.229: > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] previous ring seq 2149936 rep 162.16.96.229 > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] aru 16 high delivered 16 received flag 1 > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] position [1] member 162.16.96.230: > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] previous ring seq 2149936 rep 162.16.96.229 > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] aru 16 high delivered 16 received flag 1 > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] Did not need to originate any messages in recove > ry. 
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] Sending initial ORF token > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] CLM CONFIGURATION CHANGE > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] New Configuration: > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.229) > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.230) > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Left: > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Joined: > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] CLM CONFIGURATION CHANGE > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] New Configuration: > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.229) > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.230) > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Left: > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Joined: > Jun 20 13:49:06 HOST1 openais[15265]: [SYNC ] This node is within the primary component and wi > ll provide service. > Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] entering OPERATIONAL state. > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] got nodejoin message 162.16.96.229 > Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] got nodejoin message 162.16.96.230 > Jun 20 13:49:06 HOST1 openais[15265]: [CPG ] got joinlist message from node 2 > Jun 20 13:49:06 HOST1 openais[15265]: [CPG ] got joinlist message from node 1 > Jun 20 13:53:38 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state. > > The contents of this email are confidential and may be subject to legal or professional privilege and copyright. No representation is made that this email is free of viruses or other defects. If you have received this communication in error, you may not copy or distribute any part of it or otherwise disclose its contents to anyone. Please advise the sender of your incorrect receipt of this correspondence. > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From lhh at redhat.com Wed Jul 2 18:48:22 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 02 Jul 2008 14:48:22 -0400 Subject: [Linux-cluster] Availability and Working of Service In-Reply-To: <55459.36160.qm@web45809.mail.sp1.yahoo.com> References: <55459.36160.qm@web45809.mail.sp1.yahoo.com> Message-ID: <1215024502.23062.29.camel@localhost.localdomain> On Fri, 2008-06-27 at 03:01 -0700, Mshehzad Pankhawala wrote: > Hello Every one, > > I am planning to configure Asterisk Cluster using some of the > clustering technologies (LVS or OpenSER or Heartbeat or any other > thing). > > My problem is that Heartbeat and other component just check the > availability of the Server which is to be clustered. But I also want > the Service Asterisk should also be checked like Server is Answering > the call properly, All the functionality of Asterisk Server is working > properly or other services such as voice mail server (which is used by > Asterisk Server) is running properly. > > Any body can guide me how to do that, is there any components, tools > available, or Any Asterisk Specific tool to check Asterisk services > etc. then please reply. Cluster resource managers (one is included as part of heartbeat) can certainly perform any check you can write a script for. I would expect you start Asterisk with a script; e.g.: /etc/init.d/asterisk start That script probably also has a stop and a status action: ? /etc/init.d/asterisk stop ? 
/etc/init.d/asterisk status The 'status' action can be used by heartbeat / rgmanager / etc. to check the health of the asterisk server. -- Lon From ssingh at amnh.org Wed Jul 2 18:58:12 2008 From: ssingh at amnh.org (Sajesh Singh) Date: Wed, 02 Jul 2008 14:58:12 -0400 Subject: [Linux-cluster] Multipathing, CLVM and GFS Message-ID: <486BCFC4.9030203@amnh.org> Centos 4.6 Cluster Suite I am currently running a 2 node GFS cluster. The storage is provided via a fiber channel connection to the SAN. Each node currently has a single FC connection to the SAN. I would like to migrate to using dm-multipath with each node having dual fiber channel connections to the SAN. Can I assume that CLVM is aware of the /dev/dm-# devices that are used to access the multipathed devices? Are there any gotchas that are associated with installing the device-mapper-multipath software after the GFS cluster is up and running? Are there any howtos available for this type of setup? Regards and TIA, Sajesh Singh From dirk.schulz at kinzesberg.de Wed Jul 2 18:55:17 2008 From: dirk.schulz at kinzesberg.de (Dirk H. Schulz) Date: Wed, 02 Jul 2008 20:55:17 +0200 Subject: [Linux-cluster] Crashing machines with luci and ricci Message-ID: Hi folks, I have tried setting up a cluster with ricci and luci. I have done the following: - set up 2 cluster nodes - current patch level applied (5.2) - installed ricci on the nodes and luci on a management station - used luci web interface to setup the cluster After initial setup luci stated that one node could not be reached or had ricci not running. Both nodes were set up identical and could be reached fine. ricci was running on both machines. So I used the "restart the cluster" button - and that crashed both nodes within 10 minutes. One machine was unreachable nearly at once, the other had 100 % CPU load for several minutes before going down. Now so far there is not much I could have done wrong (at least not according to documentation). So I would like to know: Is this normal? Is using ricci and luci a bad idea because they simply do not work? Or the other way round: Are folks out there using these tools with positive results - and are there nuts and bolts I could have avoided? Any hint or help is appreciated. Dirk From ccaulfie at redhat.com Thu Jul 3 07:29:51 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Thu, 03 Jul 2008 08:29:51 +0100 Subject: [Linux-cluster] Multipathing, CLVM and GFS In-Reply-To: <486BCFC4.9030203@amnh.org> References: <486BCFC4.9030203@amnh.org> Message-ID: <486C7FEF.8070300@redhat.com> Sajesh Singh wrote: > Centos 4.6 > Cluster Suite > > I am currently running a 2 node GFS cluster. The storage is provided via > a fiber channel connection to the SAN. Each node currently has a single > FC connection to the SAN. I would like to migrate to using dm-multipath > with each node having dual fiber channel connections to the SAN. Can I > assume that CLVM is aware of the /dev/dm-# devices that are used to > access the multipathed devices? Are there any gotchas that are > associated with installing the device-mapper-multipath software after > the GFS cluster is up and running? Are there any howtos available for > this type of setup? > clvmd works fine with dm-multipath devices. You will probably have to edit /etc/lvm/lvm.conf to exclude the underlying /dev/sd devices to stop it getting confused though. You won't be able to do this with GFS mounted on the local node though, you'll have to umount it, setup dm-multipath, vgscan & remount. 
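To make that lvm.conf change concrete, a minimal sketch of the kind of filter being described -- the mpath device names and the exact pattern here are illustrative assumptions, not taken from this setup:

  # /etc/lvm/lvm.conf on each node: accept the multipath devices,
  # reject the underlying /dev/sd* paths so LVM/clvmd only sees one copy
  filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|" ]

  # after installing device-mapper-multipath, bring it up and rescan
  service multipathd start
  multipath -ll
  vgscan

Adjust the filter to whatever device names dm-multipath actually creates on the nodes.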
You CAN leave them mounted on other nodes while you do it. -- Chrissie
From grimme at atix.de Thu Jul 3 08:01:14 2008 From: grimme at atix.de (Marc Grimme) Date: Thu, 3 Jul 2008 09:01:14 +0100 Subject: [Linux-cluster] Last and final official release candidate of the com.oonics open shared root cluster installation DVD is available (RC4) Message-ID: <200807031001.14953.grimme@atix.de> Hello, we are very happy to announce the availability of the last and final official release candidate of the com.oonics open shared root cluster installation DVD (RC4). The com.oonics open shared root cluster installation DVD allows the installation of a single node open shared root cluster with the use of anaconda, the well known installation software provided by Red Hat. After the installation, the open shared root cluster can be easily scaled up to more than a hundred cluster nodes. You can now download the open shared root installation DVD from www.open-sharedroot.org. We are very interested in feedback. Please either file a bug or feature request or post to the mailing list (see www.open-sharedroot.org). More details can be found here: http://open-sharedroot.org/news-archive/availability-of-rc4-of-the-com-oonics-version-of-anaconda Note: The download isos are based on Centos5.1! RHEL5.1 versions will be provided on request. Have fun testing it and let us know what you're thinking. -- Gruss / Regards, Marc Grimme http://www.atix.de/ http://www.open-sharedroot.org/
From garromo at us.ibm.com Thu Jul 3 12:42:37 2008 From: garromo at us.ibm.com (Gary Romo) Date: Thu, 3 Jul 2008 06:42:37 -0600 Subject: [Linux-cluster] Cluster server maintenance Message-ID: I have a two node cluster, RHEL 5, Protocol version: 5.0.1. Can anyone suggest the best method, and/or explain -u and -q of the clusvcadm command to me? Thanks! Here is what I want to do: 1. Shutdown the services running; DBs, apps whatever... 2. I don't want the services starting on the other node, or anywhere. 3. We don't want any fencing to take place 4. We do our maintenance; Patch server, whatever... 5. Bring the services back up; DBs, apps whatever... Now the only way I have found to do this so far is to disable the service. # clusvcadm -d (and maybe that is the only answer) Man pages do not provide much information -u Unlock the cluster's service managers. This allows services to transition again. It will be necessary to re-enable all services in the stopped state if this is run after clushutdown. Also a -q or quiet operation, which I am not finding any information about. # clusvcadm -h Resource Group Control Commands: clusvcadm -v Display version and exit clusvcadm -d Disable clusvcadm -e Enable on the local node clusvcadm -e -F Enable according to failover domain rules clusvcadm -e -m Enable on clusvcadm -r -m Relocate [to ] clusvcadm -q Quiet operation clusvcadm -R Restart a group in place. clusvcadm -s Stop Resource Group Locking (for cluster Shutdown / Debugging): clusvcadm -l Lock local resource group manager. This prevents resource groups from starting on the local node. clusvcadm -S Show lock state clusvcadm -u Unlock local resource group manager. This allows resource groups to start on the local node. Gary Romo IBM Global Technology Services 303.458.4415 Email: garromo at us.ibm.com Pager:1.877.552.9264 Text message: gromo at skytel.com -------------- next part -------------- An HTML attachment was scrubbed...
URL:
From lhh at redhat.com Thu Jul 3 17:36:53 2008 From: lhh at redhat.com (Lon Hohberger) Date: Thu, 03 Jul 2008 13:36:53 -0400 Subject: [Linux-cluster] Cluster server maintenance In-Reply-To: References: Message-ID: <1215106613.23062.48.camel@localhost.localdomain> On Thu, 2008-07-03 at 06:42 -0600, Gary Romo wrote: > I have a two node cluster, RHEL 5, Protocol version: 5.0.1. Can anyone > suggest > the best method, and/or explain -u and -q of the clusvcadm command to > me? Thanks! > > Here is what I want to do: > > 1. Shutdown the services running; DBs, apps whatever... > 2. I don't want the services starting on the other node, or anywhere. > 3. We don't want any fencing to take place > 4. We do our maintenance; Patch server, whatever... > 5. Bring the services back up; DBs, apps whatever... > Now the only way I have found to do this so far is to disable the > service. > > # clusvcadm -d (and maybe that is the only answer) That's what it's for. Stopping a service (clusvcadm -s) will stop the service until the next member transition. Disabling (-d) a service stops it until either quorum is broken or all instances of rgmanager have been stopped. That is, as long as one instance of rgmanager is operating and the cluster is quorate, the service will remain disabled. Disabling autostart in Conga (or setting it to 0 in cluster.conf) for a given service means "on startup, treat this service as disabled instead of stopped". Locking rgmanager prevents failover, and is useful in mass simultaneous shutdown operations, but less so for individual services. The manual page needs updating; '-l' only needs to be done once. [one node ] clusvcadm -l [all nodes] service rgmanager stop -q = "don't print stuff" -- Lon
From theophanis_kontogiannis at yahoo.gr Sat Jul 5 15:46:16 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Sat, 5 Jul 2008 18:46:16 +0300 Subject: [Linux-cluster] Issue with clvmd - Is it really bug?? Message-ID: <01cb01c8deb6$44921e60$cdb65b20$@gr> Hello, I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2 with DRBD 2.6.18-92.1.6.el5.centos.plus drbd82-8.2.6-1.el5.centos lvm2-2.02.32-4.el5 lvm2-cluster-2.02.32-4.el5 system-config-lvm-1.1.3-2.0.el5 I do not know if my problem is directly related to http://kbase.redhat.com/faq/FAQ_51_10471.shtm and https://bugzilla.redhat.com/show_bug.cgi?id=138396 I do: pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1 vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1 lvcreate -v -L 348G -n data0 vg0 Then I reboot. The LV never becomes available. If I try vgchange -a y I get Error locking on node tweety-1: Volume group for uuid not found: 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa 0 logical volume(s) in volume group "vg0" now active If I do clvmd -R Then with vgchange -a y vg0. the LV becomes available. Is this really related to the above mentioned bug? How can I make the LV become available during boot up without any intervention? Thank you all for your time, Theophanis Kontogiannis -------------- next part -------------- An HTML attachment was scrubbed... URL:
From magawake at gmail.com Sun Jul 6 15:44:00 2008 From: magawake at gmail.com (Mag Gam) Date: Sun, 6 Jul 2008 11:44:00 -0400 Subject: [Linux-cluster] GUI for cluster.conf Message-ID: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> Can someone recommend a GUI to configure cluster.conf for me? TIA -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lists at brimer.org Sun Jul 6 15:47:17 2008 From: lists at brimer.org (Barry Brimer) Date: Sun, 6 Jul 2008 10:47:17 -0500 (CDT) Subject: [Linux-cluster] GUI for cluster.conf In-Reply-To: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> Message-ID: > Can someone recommend a GUI to configure cluster.conf for me? system-config-cluster From td3201 at gmail.com Sun Jul 6 17:35:55 2008 From: td3201 at gmail.com (Terry) Date: Sun, 6 Jul 2008 12:35:55 -0500 Subject: [Linux-cluster] GUI for cluster.conf In-Reply-To: References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> Message-ID: <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com> On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote: >> Can someone recommend a GUI to configure cluster.conf for me? > > system-config-cluster > For what it's worth, I tried both system-config-cluster and Conga and found old fashioned command line tools to be more convenient. Granted, their organization and naming conventions need some work but after you use them a little while, you'll memorize them. Also, I leaned heavily upon google to find all the configuration options as I couldn't find much in the man pages. From magawake at gmail.com Sun Jul 6 17:55:22 2008 From: magawake at gmail.com (Mag Gam) Date: Sun, 6 Jul 2008 13:55:22 -0400 Subject: [Linux-cluster] GUI for cluster.conf In-Reply-To: <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com> References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com> Message-ID: <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com> Thanks On Sun, Jul 6, 2008 at 1:35 PM, Terry wrote: > On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote: > >> Can someone recommend a GUI to configure cluster.conf for me? > > > > system-config-cluster > > > > For what it's worth, I tried both system-config-cluster and Conga and > found old fashioned command line tools to be more convenient. > Granted, their organization and naming conventions need some work but > after you use them a little while, you'll memorize them. Also, I > leaned heavily upon google to find all the configuration options as I > couldn't find much in the man pages. > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magawake at gmail.com Sun Jul 6 18:00:10 2008 From: magawake at gmail.com (Mag Gam) Date: Sun, 6 Jul 2008 14:00:10 -0400 Subject: [Linux-cluster] qdiskd question Message-ID: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> I have a 8 node cluster with shared Hitachi SAN disk. On each disk I created a 20M partition for qdisk , but only on 1 disk I created a qdisk. mkqdisk -c /dev/sda -l css Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd, etc..) or would I be find with only one disk? Also, I suppose I need to make changes to cluster.conf after I do this, correct? TIA From bfields at fieldses.org Sun Jul 6 21:51:05 2008 From: bfields at fieldses.org (J. 
Bruce Fields) Date: Sun, 6 Jul 2008 17:51:05 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080627184117.GE19105@redhat.com> References: <20080625224544.GJ12629@fieldses.org> <20080626152733.GC21081@redhat.com> <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> Message-ID: <20080706215105.GA28037@fieldses.org> On Fri, Jun 27, 2008 at 01:41:17PM -0500, David Teigland wrote: > On Fri, Jun 27, 2008 at 01:28:56PM -0400, david m. richter wrote: > > i also have another setup in vmware; while i doubt it's > > substantively different than bruce's, i'm a ready and willing tester. is > > there a different branch (or repo, or just a stack of patches somewhere) > > that i should/could be using? > > If on 2.6.25, then use > > ftp://ftp%40openais%2Eorg:downloads at openais.org/downloads/openais-0.80.3/openais-0.80.3.tar.gz > ftp://sources.redhat.com/pub/cluster/releases/cluster-2.03.04.tar.gz > > If on 2.6.26-rc, then you'll need to add the attached patch to cluster. I tried that patch against STABLE2, and needed the following to get it to compile. diff --git a/group/gfs_controld/plock.c b/group/gfs_controld/plock.c index 5e4f56b..f04a6b8 100644 --- a/group/gfs_controld/plock.c +++ b/group/gfs_controld/plock.c @@ -790,7 +790,7 @@ static void write_result(struct mountgroup *mg, struct dlm_plock_info *in, in->fsid = mg->associated_ls_id; in->rv = rv; - write(control_fd, in, sizeof(struct gdlm_plock_info)); + write(control_fd, in, sizeof(struct dlm_plock_info)); } static void do_waiters(struct mountgroup *mg, struct resource *r) I built everything with debugging turned on. The second mount again hangs, with a lot of this in the logs: Jul 1 14:06:42 piglet2 kernel: dlm: connecting to 1 Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node Jul 1 14:08:35 piglet2 kernel: INFO: task mount.gfs2:6130 blocked for more than 120 seconds. Jul 1 14:08:35 piglet2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jul 1 14:08:35 piglet2 kernel: mount.gfs2 D c09f0244 1896 6130 6129 Jul 1 14:08:35 piglet2 kernel: ce920bc4 00000046 ce9d28e0 c09f0244 6f5e11cb 00000621 ce9d2b40 ce9d2b40 Jul 1 14:08:35 piglet2 kernel: 00000046 cf167db8 ce9d28e0 0077d2a4 00000000 6fd5e46f 00000621 ce9d28e0 Jul 1 14:08:35 piglet2 kernel: 00000003 ce9e7874 00000002 7fffffff ce920bec c063cdc5 7fffffff ce920be0 Jul 1 14:08:35 piglet2 kernel: Call Trace: Jul 1 14:08:35 piglet2 kernel: [] schedule_timeout+0x75/0xb0 Jul 1 14:08:35 piglet2 kernel: [] ? trace_hardirqs_on+0x9d/0x110 Jul 1 14:08:35 piglet2 kernel: [] wait_for_common+0x9e/0x110 Jul 1 14:08:35 piglet2 kernel: [] ? default_wake_function+0x0/0x10 Jul 1 14:08:35 piglet2 kernel: [] wait_for_completion+0x12/0x20 Jul 1 14:08:35 piglet2 kernel: [] dlm_new_lockspace+0x766/0x7f0 Jul 1 14:08:35 piglet2 kernel: [] gdlm_mount+0x304/0x430 Jul 1 14:08:35 piglet2 kernel: [] gfs2_mount_lockproto+0x13f/0x160 Jul 1 14:08:35 piglet2 kernel: [] fill_super+0x3d2/0x6e0 Jul 1 14:08:35 piglet2 kernel: [] ? gfs2_glock_cb+0x0/0x150 Jul 1 14:08:35 piglet2 kernel: [] ? disk_name+0x25/0x90 Jul 1 14:08:35 piglet2 kernel: [] get_sb_bdev+0xef/0x120 Jul 1 14:08:35 piglet2 kernel: [] ? alloc_vfsmnt+0xd5/0x110 Jul 1 14:08:35 piglet2 kernel: [] gfs2_get_sb+0x15/0x40 Jul 1 14:08:35 piglet2 kernel: [] ? 
fill_super+0x0/0x6e0 Jul 1 14:08:35 piglet2 kernel: [] vfs_kern_mount+0x53/0x120 Jul 1 14:08:35 piglet2 kernel: [] do_kern_mount+0x31/0xc0 Jul 1 14:08:35 piglet2 kernel: [] do_new_mount+0x56/0x80 Jul 1 14:08:35 piglet2 kernel: [] do_mount+0x1c6/0x1f0 Jul 1 14:08:35 piglet2 kernel: [] ? cache_alloc_debugcheck_after+0x71/0x1a0 Jul 1 14:08:35 piglet2 kernel: [] ? __get_free_pages+0x1b/0x30 Jul 1 14:08:35 piglet2 kernel: [] ? copy_mount_options+0x2a/0x130 Jul 1 14:08:35 piglet2 kernel: [] sys_mount+0x6a/0xb0 Jul 1 14:08:35 piglet2 kernel: [] syscall_call+0x7/0xb Jul 1 14:08:35 piglet2 kernel: ======================= Jul 1 14:08:35 piglet2 kernel: 4 locks held by mount.gfs2/6130: Jul 1 14:08:35 piglet2 kernel: #0: (&type->s_umount_key#20){--..}, at: [] sget+0x176/0x360 Jul 1 14:08:35 piglet2 kernel: #1: (lmh_lock){--..}, at: [] gfs2_mount_lockproto+0x20/0x160 Jul 1 14:08:35 piglet2 kernel: #2: (&ls_lock){--..}, at: [] dlm_new_lockspace+0x1e/0x7f0 Jul 1 14:08:35 piglet2 kernel: #3: (&ls->ls_in_recovery){--..}, at: [] dlm_new_lockspace+0x5cf/0x7f0 Jul 1 14:10:44 piglet2 kernel: INFO: task mount.gfs2:6130 blocked for more than 120 seconds. Jul 1 14:10:44 piglet2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Jul 1 14:10:44 piglet2 kernel: mount.gfs2 D c09f0244 1896 6130 6129 So I gave up on this and tried going back to v2.6.25, and the suggested cluster-2.03.04, but the second mounts still hang, and a sysrq-T trace shows the mount system call hanging in dlm_new_workspace(). Since this I guess is a known-working set of software versions, I'm assuming there's something wrong with my setup.... It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is in "D" state in dlm_rcom_status(), so I guess the second node isn't getting some dlm reply it expects? --b. From pastany at gmail.com Mon Jul 7 02:47:44 2008 From: pastany at gmail.com (pastany) Date: Mon, 7 Jul 2008 10:47:44 +0800 Subject: [Linux-cluster] gfs-6.1.5 problem Message-ID: <200807071047416256159@gmail.com> Hi everyone after a power off, we cant mount our gfs partition after gfs_fsck,it still not working. here is the gfs_fsck output gfs_fsck -vv /dev/mapper/vod-lv_vod Initializing fsck Initializing lists... (bio.c:140) Writing to 65536 - 16 4096 Initializing special inodes... (file.c:45) readi: Offset (640) is >= the file size (640). (super.c:208) 8 journals found. (file.c:45) readi: Offset (1210752) is >= the file size (1210752). (super.c:265) 12612 resource groups found. (util.c:112) For 238021862 Expected 1161970:3 - got 6617DE2F:9BC483A0 Buffer #238021862 (3 of 5) is neither GFS_METATYPE_RB nor GFS_METATYPE_RG. Resource group is corrupted. Unable to read in rgrp descriptor. Unable to fill in resource group information. (initialize.c:388) - init_sbp() any help is appreciated pastany 2008-07-07 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccaulfie at redhat.com Mon Jul 7 07:23:58 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Mon, 07 Jul 2008 08:23:58 +0100 Subject: [Linux-cluster] Issue with clvmd - Is it really bug?? 
In-Reply-To: <01cb01c8deb6$44921e60$cdb65b20$@gr> References: <01cb01c8deb6$44921e60$cdb65b20$@gr> Message-ID: <4871C48E.3030004@redhat.com> Theophanis Kontogiannis wrote: > Hello, > > > > I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2 > with DRBD > > > > 2.6.18-92.1.6.el5.centos.plus > > drbd82-8.2.6-1.el5.centos > > lvm2-2.02.32-4.el5 > > lvm2-cluster-2.02.32-4.el5 > > system-config-lvm-1.1.3-2.0.el5 > > > > I do not know if my problem is directly related to > http://kbase.redhat.com/faq/FAQ_51_10471.shtm and > https://bugzilla.redhat.com/show_bug.cgi?id=138396 > > > > I do: > > > > pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1 > > vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1 > > lvcreate -v -L 348G -n data0 vg0 > > > > Then I reboot. > > The LV never becomes available. > > > > If I try > > > > vgchange -a y > > > > I get > > > > Error locking on node tweety-1: Volume group for uuid not found: > 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa > > 0 logical volume(s) in volume group "vg0" now active > > > > If I do > > > > clvmd ?R > > > > Then with > > > > vgchange ?a y vg0. > > > > the LV becomes available. > > > > Is this really related to the above mentioned bug? > > > > How can I make the LV become available during boot up without any > intervention? > > > > Thank you all for your time, As you're using drbd for the PV, I think it might be to do startup ordering. If drbd is started AFTER clvmd then it won't see the devices, and you'll get exactly the symptoms you describe. if you can, move drbd to before clvmd, or clvmd after drbd. Or, failing that, put the extra commands you used above into their own startup script. -- Chrissie From theophanis_kontogiannis at yahoo.gr Mon Jul 7 09:05:23 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Mon, 7 Jul 2008 12:05:23 +0300 Subject: [Linux-cluster] Issue with clvmd - Is it really bug?? In-Reply-To: <4871C48E.3030004@redhat.com> References: <01cb01c8deb6$44921e60$cdb65b20$@gr> <4871C48E.3030004@redhat.com> Message-ID: <020d01c8e010$98e74110$cab5c330$@gr> Hello Christine and All, This was exactly the problem. The sequence of services startup. In the past I had fixed this. However and because the problems started after the update I did to the system, it never occurred to me that the problem might be the sequence of services startup. In fact I never looked in the /etc/rc3.d to take a look at it. So because I never thought about this possibility, and because the problems started after the system update were lvm2 / clvm was also updated, it stuck in my mind that the problem was due to the new version of the clvmd and lvm2. Thank you all for your time, Sincerely, Theophanis Kontogiannis -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Christine Caulfield Sent: Monday, July 07, 2008 10:24 AM To: linux clustering Subject: Re: [Linux-cluster] Issue with clvmd - Is it really bug?? 
Theophanis Kontogiannis wrote: > Hello, > > > > I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2 > with DRBD > > > > 2.6.18-92.1.6.el5.centos.plus > > drbd82-8.2.6-1.el5.centos > > lvm2-2.02.32-4.el5 > > lvm2-cluster-2.02.32-4.el5 > > system-config-lvm-1.1.3-2.0.el5 > > > > I do not know if my problem is directly related to > http://kbase.redhat.com/faq/FAQ_51_10471.shtm and > https://bugzilla.redhat.com/show_bug.cgi?id=138396 > > > > I do: > > > > pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1 > > vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1 > > lvcreate -v -L 348G -n data0 vg0 > > > > Then I reboot. > > The LV never becomes available. > > > > If I try > > > > vgchange -a y > > > > I get > > > > Error locking on node tweety-1: Volume group for uuid not found: > 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa > > 0 logical volume(s) in volume group "vg0" now active > > > > If I do > > > > clvmd ?R > > > > Then with > > > > vgchange ?a y vg0. > > > > the LV becomes available. > > > > Is this really related to the above mentioned bug? > > > > How can I make the LV become available during boot up without any > intervention? > > > > Thank you all for your time, As you're using drbd for the PV, I think it might be to do startup ordering. If drbd is started AFTER clvmd then it won't see the devices, and you'll get exactly the symptoms you describe. if you can, move drbd to before clvmd, or clvmd after drbd. Or, failing that, put the extra commands you used above into their own startup script. -- Chrissie -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster From ozgurakan at gmail.com Mon Jul 7 10:45:53 2008 From: ozgurakan at gmail.com (Ozgur Akan) Date: Mon, 7 Jul 2008 06:45:53 -0400 Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2 Message-ID: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com> Hi, We keep "gfs_controld[3054]: plock result write err 0 errno 2" error message in message; Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 errno 2 Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 errno 2 Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0 errno 2 Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0 errno 2 Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0 errno 2 Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0 errno 2 It looks like happening every 15 minutes. Do you have any idea what this means and how can I prevent from happening? thanks, Ozgur Akan -------------- next part -------------- An HTML attachment was scrubbed... URL: From swhiteho at redhat.com Mon Jul 7 10:44:11 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Mon, 07 Jul 2008 11:44:11 +0100 Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2 In-Reply-To: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com> References: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com> Message-ID: <1215427451.4011.121.camel@quoit> Hi, Are there any other messages in the logs? Which kernel version are you using? Also do you think it might be similar to bz #454052? Steve. 
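For anyone gathering the details Steve asks for, something along these lines usually covers it -- this assumes the stock RHEL 5 cluster userland and the default syslog locations, nothing specific to this report:

  uname -r                                         # kernel version
  rpm -q cman rgmanager gfs-utils kmod-gfs         # cluster package versions
  grep gfs_controld /var/log/messages | tail -50   # surrounding log context
  group_tool ls                                    # state of the fence/dlm/gfs groups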
On Mon, 2008-07-07 at 06:45 -0400, Ozgur Akan wrote: > Hi, > > We keep "gfs_controld[3054]: plock result write err 0 errno 2" error > message > > in message; > > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0 > errno 2 > > > It looks like happening every 15 minutes. Do you have any idea what > this means and how can I prevent from happening? > > thanks, > Ozgur Akan > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From nkhare.lists at gmail.com Mon Jul 7 11:05:25 2008 From: nkhare.lists at gmail.com (Neependra Khare) Date: Mon, 07 Jul 2008 16:35:25 +0530 Subject: [Linux-cluster] qdiskd question In-Reply-To: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> Message-ID: <4871F875.2020802@gmail.com> Mag Gam wrote: > I have a 8 node cluster with shared Hitachi SAN disk. On each disk I > created a 20M partition for qdisk , but only on 1 disk I created a > qdisk. > mkqdisk -c /dev/sda -l css > > Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd, > etc..) or would I be find with only one disk? > Have you created 8 separate partition for each node. OR One shared partition which is accessible to all the nodes. Refer following for configuring quorum disks. http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorum http://www.redhatmagazine.com/2007/12/19/enhancing-cluster-quorum-with-qdisk/ > Also, I suppose I need to make changes to cluster.conf after I do this, correct? > Yes. Neependra. From vimal.jtech at gmail.com Mon Jul 7 12:27:45 2008 From: vimal.jtech at gmail.com (Vimal Gupta) Date: Mon, 7 Jul 2008 12:27:45 +0000 Subject: [Linux-cluster] GUI for cluster.conf In-Reply-To: <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com> References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com> <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com> Message-ID: <437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com> IF I am right , We also can use luci for that also . On 7/6/08, Mag Gam wrote: > > Thanks > > On Sun, Jul 6, 2008 at 1:35 PM, Terry wrote: > >> On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote: >> >> Can someone recommend a GUI to configure cluster.conf for me? >> > >> > system-config-cluster >> > >> >> For what it's worth, I tried both system-config-cluster and Conga and >> found old fashioned command line tools to be more convenient. >> Granted, their organization and naming conventions need some work but >> after you use them a little while, you'll memorize them. Also, I >> leaned heavily upon google to find all the configuration options as I >> couldn't find much in the man pages. >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From teigland at redhat.com Mon Jul 7 15:48:28 2008 From: teigland at redhat.com (David Teigland) Date: Mon, 7 Jul 2008 10:48:28 -0500 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080706215105.GA28037@fieldses.org> References: <20080625224544.GJ12629@fieldses.org> <20080626152733.GC21081@redhat.com> <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> Message-ID: <20080707154828.GB10404@redhat.com> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > - write(control_fd, in, sizeof(struct gdlm_plock_info)); > + write(control_fd, in, sizeof(struct dlm_plock_info)); Gah, sorry, I keep fixing that and it keeps reappearing. > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > in "D" state in dlm_rcom_status(), so I guess the second node isn't > getting some dlm reply it expects? dlm inter-node communication is not working here for some reason. There must be something unusual with the way the network is configured on the nodes, and/or a problem with the way the cluster code is applying the network config to the dlm. Ah, I just remembered what this sounds like; we see this kind of thing when a network interface has multiple IP addresses, and/or routing is configured strangely. Others cc'ed could offer better details on exactly what to look for. Dave From jparsons at redhat.com Mon Jul 7 15:54:07 2008 From: jparsons at redhat.com (jim parsons) Date: Mon, 07 Jul 2008 11:54:07 -0400 Subject: [Linux-cluster] GUI for cluster.conf In-Reply-To: <437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com> References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com> <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com> <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com> <437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com> Message-ID: <1215446047.3300.2.camel@localhost.localdomain> On Mon, 2008-07-07 at 12:27 +0000, Vimal Gupta wrote: > > IF I am right , We also can use luci for that also . Luci is the UI component of Conga. Command line tools are great - but when you wish to do something like restart all the cluster daemons on all of your nodes, using Conga can be handy. It saves having to shell around to all of your nodes and execute commands. jmho, -j > On 7/6/08, Mag Gam wrote: > Thanks > > On Sun, Jul 6, 2008 at 1:35 PM, Terry > wrote: > On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer > wrote: > >> Can someone recommend a GUI to configure > cluster.conf for me? > > > > system-config-cluster > > > > > For what it's worth, I tried both > system-config-cluster and Conga and > found old fashioned command line tools to be more > convenient. > Granted, their organization and naming conventions > need some work but > after you use them a little while, you'll memorize > them. Also, I > leaned heavily upon google to find all the > configuration options as I > couldn't find much in the man pages. 
> > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From teigland at redhat.com Mon Jul 7 16:46:13 2008 From: teigland at redhat.com (David Teigland) Date: Mon, 7 Jul 2008 11:46:13 -0500 Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2 In-Reply-To: <1215427451.4011.121.camel@quoit> References: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com> <1215427451.4011.121.camel@quoit> Message-ID: <20080707164613.GD10404@redhat.com> On Mon, Jul 07, 2008 at 11:44:11AM +0100, Steven Whitehouse wrote: > Hi, > > Are there any other messages in the logs? Which kernel version are you > using? Also do you think it might be similar to bz #454052? https://bugzilla.redhat.com/show_bug.cgi?id=446128 > On Mon, 2008-07-07 at 06:45 -0400, Ozgur Akan wrote: > > Hi, > > > > We keep "gfs_controld[3054]: plock result write err 0 errno 2" error > > message > > > > in message; > > > > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0 > > errno 2 > > > > > > It looks like happening every 15 minutes. Do you have any idea what > > this means and how can I prevent from happening? > > > > thanks, > > Ozgur Akan From lhh at redhat.com Mon Jul 7 17:22:51 2008 From: lhh at redhat.com (Lon Hohberger) Date: Mon, 07 Jul 2008 13:22:51 -0400 Subject: [Linux-cluster] qdiskd question In-Reply-To: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> Message-ID: <1215451371.22549.77.camel@localhost.localdomain> On Sun, 2008-07-06 at 14:00 -0400, Mag Gam wrote: > I have a 8 node cluster with shared Hitachi SAN disk. On each disk I > created a 20M partition for qdisk , but only on 1 disk I created a > qdisk. > mkqdisk -c /dev/sda -l css > > Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd, > etc..) or would I be find with only one disk? > > Also, I suppose I need to make changes to cluster.conf after I do this, correct? It currently doesn't support >1 disk. -- Lon From mdmunazir at gmail.com Mon Jul 7 18:40:26 2008 From: mdmunazir at gmail.com (Mohammed Munazir Ul Hasan) Date: Mon, 7 Jul 2008 21:40:26 +0300 Subject: [Linux-cluster] qdiskd question In-Reply-To: <1215451371.22549.77.camel@localhost.localdomain> References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com> <1215451371.22549.77.camel@localhost.localdomain> Message-ID: Hi All Administrator and Users, Myself Mohammed Munazir working as a Linux Administrator in Saudi Arabia. I am Redhat Certified Engineer. My company is planning to go for RedHat Cluster for Webhosting Servers. We have LAMP Server Configure. As I am a fresher i never done RedHat Clustering. If anyone can help me regarding this. I need good document for Clustering and Storage Management. Good links. 
If you all experts help me i will be very thankful to you. Waiting for early and favorable reply from all of you. Thanking You Mohammed Munazir On 7/7/08, Lon Hohberger wrote: > > On Sun, 2008-07-06 at 14:00 -0400, Mag Gam wrote: > > I have a 8 node cluster with shared Hitachi SAN disk. On each disk I > > created a 20M partition for qdisk , but only on 1 disk I created a > > qdisk. > > mkqdisk -c /dev/sda -l css > > > > Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd, > > etc..) or would I be find with only one disk? > > > > Also, I suppose I need to make changes to cluster.conf after I do this, > correct? > > It currently doesn't support >1 disk. > > -- Lon > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfields at fieldses.org Mon Jul 7 18:49:28 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Mon, 7 Jul 2008 14:49:28 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080707154828.GB10404@redhat.com> References: <20080626152733.GC21081@redhat.com> <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> Message-ID: <20080707184928.GE14291@fieldses.org> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > > - write(control_fd, in, sizeof(struct gdlm_plock_info)); > > + write(control_fd, in, sizeof(struct dlm_plock_info)); > > Gah, sorry, I keep fixing that and it keeps reappearing. > > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > > in "D" state in dlm_rcom_status(), so I guess the second node isn't > > getting some dlm reply it expects? > > dlm inter-node communication is not working here for some reason. There > must be something unusual with the way the network is configured on the > nodes, and/or a problem with the way the cluster code is applying the > network config to the dlm. > > Ah, I just remembered what this sounds like; we see this kind of thing > when a network interface has multiple IP addresses, and/or routing is > configured strangely. Others cc'ed could offer better details on exactly > what to look for. OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on neither, and it's entirely likely there's some obvious misconfiguration. 
On the kvm host there are 4 virtual interfaces bridged together: bfields at pig:~$ brctl show bridge name bridge id STP enabled interfaces vnet0 8000.00ff0823c0f3 yes vnet1 vnet2 vnet3 vnet4 vnet0 has address 192.168.122.1 on the host, and the 4 kvm guests are statically assigned addresses 129, 130, 131, and 132 on the 192.168.122.* network, so a kvm guest looks like: piglet1:~# ifconfig eth1 Link encap:Ethernet HWaddr 00:16:3e:16:4d:61 inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0 inet6 addr: fe80::216:3eff:fe16:4d61/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2464 errors:0 dropped:0 overruns:0 frame:0 TX packets:1806 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:197099 (192.4 KiB) TX bytes:165606 (161.7 KiB) Interrupt:11 Base address:0xc100 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:285 errors:0 dropped:0 overruns:0 frame:0 TX packets:285 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13394 (13.0 KiB) TX bytes:13394 (13.0 KiB) piglet1:~# cat /etc/hosts 127.0.0.1 localhost 192.168.122.129 piglet1 192.168.122.130 piglet2 192.168.122.131 piglet3 192.168.122.132 piglet4 # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters ff02::3 ip6-allhosts The network setup looks otherwise fine--they can all ping each other and the outside world. --b. From ozgurakan at gmail.com Mon Jul 7 21:18:07 2008 From: ozgurakan at gmail.com (Ozgur Akan) Date: Mon, 7 Jul 2008 17:18:07 -0400 Subject: [Linux-cluster] quota and noatime configurations Message-ID: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> Hi, Even I have ' options="noatime,quota=off" ' in my cluster.conf file, I see [gfs2_quotad] running and I can not see quota in mtab file /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id=196610:first=1 0 0 is this normal? thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From shawnlhood at gmail.com Mon Jul 7 21:22:41 2008 From: shawnlhood at gmail.com (Shawn Hood) Date: Mon, 7 Jul 2008 17:22:41 -0400 Subject: [Linux-cluster] quota and noatime configurations In-Reply-To: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> Message-ID: Have you tried noquota? 2008/7/7 Ozgur Akan : > Hi, > > Even I have ' options="noatime,quota=off" ' in my cluster.conf file, > I see [gfs2_quotad] running and > > I can not see quota in mtab file > /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id > =196610:first=1 0 0 > > is this normal? 
> > thanks, > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- -- Shawn Hood 910.670.1819 m From swhiteho at redhat.com Tue Jul 8 09:43:56 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Tue, 08 Jul 2008 10:43:56 +0100 Subject: [Linux-cluster] quota and noatime configurations In-Reply-To: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> Message-ID: <1215510236.3475.0.camel@localhost.localdomain> Hi, On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote: > Hi, > > Even I have ' options="noatime,quota=off" ' in my cluster.conf > file, > I see [gfs2_quotad] running and > > I can not see quota in mtab file > /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id > =196610:first=1 0 0 > > is this normal? > Yes, it will likely only appear if you turn it on since the default is off, Steve. > thanks, > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From kees at tweakers.net Tue Jul 8 12:25:31 2008 From: kees at tweakers.net (Kees Hoekzema) Date: Tue, 8 Jul 2008 14:25:31 +0200 Subject: [Linux-cluster] Freezing GFS mount in a cluster Message-ID: <004a01c8e0f5$b6d8ccd0$248a6670$@net> Hello List, Recently we bought an Dell MD3000 iSCSI storage system and we are trying to get GFS running on it. I have 3 test servers hooked up to the MD3000i and I have the cluster working, including multipath and different paths. When I had the cluster up with all 3 nodes in the fence domain and cman_tool status reporting 3 nodes I made a GFS partition and formatted it: # gfs_mkfs -j 10 -p lock_dlm -t tweakers:webdata /dev/mapper/webdata-part1 This worked and I could mount the filesystem on the server I made it on. However, as soon as I tried to mount it on one of the two other servers, I would get a freeze and get fenced. After a fresh reboot of the complete cluster I tried to mount it again. The first server could mount it, but any server that would try to mount it with the first server having the gfs mounted would crash. As I'm fairly new to cman/fencing/gfs-clusters, I was wondering if this is something 'silly' configuration error, or that there is something seriously wrong. Another thing I would like to know is where to get debug information. Right now there is not a lot debug information available, or at least I couldn't find it. One thing that particularly annoyed me was the ' Waiting for fenced to join the fence group.' message which didn't come with any explanation whatsoever. That message finally went away when I powered up the two other servers and started the cluster on all three simultaneously. Anyway, my cluster config for this testing. I use manual fencing for testing as the environment I test it in does not have exactly the same hardware as I have in the production environment. Conclusion: - why can't I mount GFS on another server, when it is mounted on one? - how do I get more debug information (ie: reason why a server can't join a fence domein. Or the reason why a server gets fenced). 
Thank you all for your time, Kees Hoekzema From andy at andrewprice.me.uk Tue Jul 8 17:12:56 2008 From: andy at andrewprice.me.uk (Andrew Price) Date: Tue, 08 Jul 2008 18:12:56 +0100 Subject: [Linux-cluster] Re: quota and noatime configurations In-Reply-To: <1215510236.3475.0.camel@localhost.localdomain> References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> <1215510236.3475.0.camel@localhost.localdomain> Message-ID: On 08/07/08 10:43, Steven Whitehouse wrote: > On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote: >> Hi, >> >> Even I have ' options="noatime,quota=off" ' in my cluster.conf >> file, >> I see [gfs2_quotad] running and >> >> I can not see quota in mtab file >> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id >> =196610:first=1 0 0 >> >> is this normal? >> > Yes, it will likely only appear if you turn it on since the default is > off, If I'm reading the code correctly, gfs2_quotad is always started regardless of the quota options. -- Andy Price From ozgurakan at gmail.com Tue Jul 8 17:20:30 2008 From: ozgurakan at gmail.com (Ozgur Akan) Date: Tue, 8 Jul 2008 13:20:30 -0400 Subject: [Linux-cluster] lock_dlm to lock_nolock Message-ID: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com> Hi, Can I mount a gfs filesystem formatted with lock_dlmlock and use it without a problem in the cluster if I have proper fencing and that fs is mounted to only one node at a time? mount -o lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo/ [root at rhtest01 ~]# ./ping -rw /gfstwo/test 1 data increment = 1 140012 locks/sec [root at rhtest01 ~]# gfs2_tool df /gfstwo/ /gfstwo: SB lock proto = "lock_dlm" SB lock table = "testcluster:gfstwo" SB ondisk format = 1801 SB multihost format = 1900 Block size = 4096 Journals = 3 Resource Groups = 60 Mounted lock proto = "lock_nolock" Mounted lock table = "testcluster:gfstwo" Mounted host data = "" Journal number = 0 Lock module flags = 1 Local flocks = TRUE thanks, Ozgur Akan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozgurakan at gmail.com Tue Jul 8 17:31:25 2008 From: ozgurakan at gmail.com (Ozgur Akan) Date: Tue, 8 Jul 2008 13:31:25 -0400 Subject: [Linux-cluster] lock_dlm to lock_nolock Message-ID: <68f132770807081031p56400c11xd2792659537e5ef6@mail.gmail.com> (sorry if you get this email twice) Hi, Can I mount a gfs filesystem formatted with lock_dlmlock and use it without a problem in the cluster if I have proper fencing and that fs is mounted to only one node at a time? mount -o lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo/ [root at rhtest01 ~]# ./ping -rw /gfstwo/test 1 data increment = 1 140012 locks/sec [root at rhtest01 ~]# gfs2_tool df /gfstwo/ /gfstwo: SB lock proto = "lock_dlm" SB lock table = "testcluster:gfstwo" SB ondisk format = 1801 SB multihost format = 1900 Block size = 4096 Journals = 3 Resource Groups = 60 Mounted lock proto = "lock_nolock" Mounted lock table = "testcluster:gfstwo" Mounted host data = "" Journal number = 0 Lock module flags = 1 Local flocks = TRUE thanks, Ozgur Akan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfields at fieldses.org Tue Jul 8 22:15:33 2008 From: bfields at fieldses.org (J. 
Bruce Fields) Date: Tue, 8 Jul 2008 18:15:33 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080707184928.GE14291@fieldses.org> References: <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> Message-ID: <20080708221533.GI15038@fieldses.org> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > > > - write(control_fd, in, sizeof(struct gdlm_plock_info)); > > > + write(control_fd, in, sizeof(struct dlm_plock_info)); > > > > Gah, sorry, I keep fixing that and it keeps reappearing. > > > > > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > > > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > > > in "D" state in dlm_rcom_status(), so I guess the second node isn't > > > getting some dlm reply it expects? > > > > dlm inter-node communication is not working here for some reason. There > > must be something unusual with the way the network is configured on the > > nodes, and/or a problem with the way the cluster code is applying the > > network config to the dlm. > > > > Ah, I just remembered what this sounds like; we see this kind of thing > > when a network interface has multiple IP addresses, and/or routing is > > configured strangely. Others cc'ed could offer better details on exactly > > what to look for. > > OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on > neither, and it's entirely likely there's some obvious misconfiguration. > On the kvm host there are 4 virtual interfaces bridged together: I ran wireshark on vnet0 while doing the second mount; what I saw was the second machine opened a tcp connection to port 21064 on the first (which had already completed the mount), and sent it a single message identified by wireshark as "DLM3" protocol, type recovery command: status command. It got back an ACK then a RST. Then the same happened in the other direction, with the first machine sending a similar message to port 21064 on the second, which then reset the connection. --b. 
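For anyone else chasing this: one way to narrow it down is to capture the DLM traffic and compare the source addresses of the connections with the addresses cman believes the cluster members have. A rough sketch, using the bridge name from this setup (the capture file name is only an example):

On the kvm host, watch the DLM port (21064 by default):

  tcpdump -i vnet0 -n -s 0 -w dlm-21064.pcap 'tcp port 21064'

On each guest, check what cman reports for the cluster membership, and compare those addresses with the source addresses in the capture:

  cman_tool status
  cman_tool nodes

If an incoming DLM connection comes from an address cman does not know for any cluster member (an extra IP on the interface, or traffic leaving through a different interface than expected), the DLM drops it with exactly the "connect from non cluster node" message quoted above.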
> > bfields at pig:~$ brctl show > bridge name bridge id STP enabled interfaces > vnet0 8000.00ff0823c0f3 yes vnet1 > vnet2 > vnet3 > vnet4 > > vnet0 has address 192.168.122.1 on the host, and the 4 kvm guests are > statically assigned addresses 129, 130, 131, and 132 on the 192.168.122.* > network, so a kvm guest looks like: > > piglet1:~# ifconfig > eth1 Link encap:Ethernet HWaddr 00:16:3e:16:4d:61 > inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0 > inet6 addr: fe80::216:3eff:fe16:4d61/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:2464 errors:0 dropped:0 overruns:0 frame:0 > TX packets:1806 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:1000 > RX bytes:197099 (192.4 KiB) TX bytes:165606 (161.7 KiB) > Interrupt:11 Base address:0xc100 > > lo Link encap:Local Loopback > inet addr:127.0.0.1 Mask:255.0.0.0 > inet6 addr: ::1/128 Scope:Host > UP LOOPBACK RUNNING MTU:16436 Metric:1 > RX packets:285 errors:0 dropped:0 overruns:0 frame:0 > TX packets:285 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:0 > RX bytes:13394 (13.0 KiB) TX bytes:13394 (13.0 KiB) > > piglet1:~# cat /etc/hosts > 127.0.0.1 localhost > 192.168.122.129 piglet1 > 192.168.122.130 piglet2 > 192.168.122.131 piglet3 > 192.168.122.132 piglet4 > > # The following lines are desirable for IPv6 capable hosts > ::1 ip6-localhost ip6-loopback > fe00::0 ip6-localnet > ff00::0 ip6-mcastprefix > ff02::1 ip6-allnodes > ff02::2 ip6-allrouters > ff02::3 ip6-allhosts > > The network setup looks otherwise fine--they can all ping each other and > the outside world. > > --b. From ajeet.singh.raina at logica.com Wed Jul 9 06:02:49 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 11:32:49 +0530 Subject: [Linux-cluster] Setting Up Two Node Cluster.. Message-ID: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com> Hello Guys, I am totally in clustering field and recently involved in a project related to Setting Up a Red Hat Cluster. I have two RHEL 4.0 Update 2 Servers which I have installed with the following packages each : ccs-1.0.6-0.x86_64.rpm cman-1.0.8-0.x86_64.rpm cman-kernel-smp-2.6.9-39.5.x86_64.rpm cman-kernel-smp-2.6.9-44.7.x86_64.rpm device-mapper-1.02.25-1.el4.x86_64.rpm dlm-1.0.1-1.x86_64.rpm dlm-kernel-smp-2.6.9-37.7.x86_64.rpm dlm-kernel-smp-2.6.9-39.1.x86_64.rpm dlm-kernel-smp-2.6.9-42.7.x86_64.rpm dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.21-7.el4.x86_64.rpm lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm magma-1.0.5-0.x86_64.rpm magma-plugins-1.0.8-0.x86_64.rpm rgmanager-1.9.50-0.x86_64.rpm system-config-cluster-1.0.27-1.0.noarch.rpm system-config-cluster-1[1].0.27-1.0.noarch.rpm perl-Crypt-SSLeay-0.51-5.x86_64.rpm On 10.14.236.106 I ran # system-config-cluster and I added the two Node - One itself(10.14.236.106) and the other(10.14.236.108). I added The ILO as my Fencing Device providing the right credentials.I dint added any Resource and Service as I just want to test whether the two amchines sees wach other or not. I saved the file and it gave me cluster.conf. Next I ran #service ccsd start #service cman start That Brought out Cluster Management Option next to Cluster Configuration label. I transported the cluster.conf manually through scp to the next machine. 
Now I too ran the ccsd and cman on the other machine. Then I ran #service fenced start #service rgmanager start One by one to the two machine. When I ran the command: Member Status: Quorate Member Name Status ------ ---- ------ BL02DL385 Online, rgmanager BL01DL385 Online, Local, rgmanager [root at BL01DL385 ~]# So My Nodes are seeing each other.Upto this Its Fine. Now I have one script called tester.sh placed in 106 machine and All I am adding it to Script Section under Resource giving the full path. Now Again I am restarting the service in order. Now The Cluster.conf file is same in both the system Say,if I reboot the 106 system, Will the next Server show running the script????? Please Advise. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Wed Jul 9 06:04:32 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 11:34:32 +0530 Subject: [Linux-cluster] RE: Setting Up Two Node Cluster.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB20808C170@in-ex004.groupinfra.com> FYI I have no Shared Storage. Is it needed in this scenario? What Could be the right alternative? ________________________________ From: Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 11:33 AM To: 'linux-cluster at redhat.com' Subject: Setting Up Two Node Cluster.. Hello Guys, I am totally in clustering field and recently involved in a project related to Setting Up a Red Hat Cluster. I have two RHEL 4.0 Update 2 Servers which I have installed with the following packages each : ccs-1.0.6-0.x86_64.rpm cman-1.0.8-0.x86_64.rpm cman-kernel-smp-2.6.9-39.5.x86_64.rpm cman-kernel-smp-2.6.9-44.7.x86_64.rpm device-mapper-1.02.25-1.el4.x86_64.rpm dlm-1.0.1-1.x86_64.rpm dlm-kernel-smp-2.6.9-37.7.x86_64.rpm dlm-kernel-smp-2.6.9-39.1.x86_64.rpm dlm-kernel-smp-2.6.9-42.7.x86_64.rpm dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.21-7.el4.x86_64.rpm lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm magma-1.0.5-0.x86_64.rpm magma-plugins-1.0.8-0.x86_64.rpm rgmanager-1.9.50-0.x86_64.rpm system-config-cluster-1.0.27-1.0.noarch.rpm system-config-cluster-1[1].0.27-1.0.noarch.rpm perl-Crypt-SSLeay-0.51-5.x86_64.rpm On 10.14.236.106 I ran # system-config-cluster and I added the two Node - One itself(10.14.236.106) and the other(10.14.236.108). I added The ILO as my Fencing Device providing the right credentials.I dint added any Resource and Service as I just want to test whether the two amchines sees wach other or not. I saved the file and it gave me cluster.conf. Next I ran #service ccsd start #service cman start That Brought out Cluster Management Option next to Cluster Configuration label. I transported the cluster.conf manually through scp to the next machine. Now I too ran the ccsd and cman on the other machine. 
Then I ran #service fenced start #service rgmanager start One by one to the two machine. When I ran the command: Member Status: Quorate Member Name Status ------ ---- ------ BL02DL385 Online, rgmanager BL01DL385 Online, Local, rgmanager [root at BL01DL385 ~]# So My Nodes are seeing each other.Upto this Its Fine. Now I have one script called tester.sh placed in 106 machine and All I am adding it to Script Section under Resource giving the full path. Now Again I am restarting the service in order. Now The Cluster.conf file is same in both the system Say,if I reboot the 106 system, Will the next Server show running the script????? Please Advise. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nkhare.lists at gmail.com Wed Jul 9 06:55:30 2008 From: nkhare.lists at gmail.com (Neependra Khare) Date: Wed, 09 Jul 2008 12:25:30 +0530 Subject: [Linux-cluster] Setting Up Two Node Cluster.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com> Message-ID: <487460E2.7070800@gmail.com> Singh Raina, Ajeet wrote: > > So My Nodes are seeing each other.Upto this Its Fine. > > Now I have one script called tester.sh placed in 106 machine and All I > am adding it to Script Section under Resource giving the full path. > I think you need to attach that script resource to a service , so that rgmanager can check the status at regular interval. Make sure the script is LSB compliant. http://refspecs.freestandards.org/LSB_2.0.1/LSB-Core/LSB-Core/iniscrptact.html http://sources.redhat.com/cluster/wiki/FAQ/RGManager#rgm_wontrestart > > Now Again I am restarting the service in order. > > Now The Cluster.conf file is same in both the system > > Say,if I reboot the 106 system, Will the next Server show running the > script????? > > The question is not clear to me.Can you please give more details? Neependra From swhiteho at redhat.com Wed Jul 9 08:44:24 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Wed, 09 Jul 2008 09:44:24 +0100 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080708221533.GI15038@fieldses.org> References: <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> Message-ID: <1215593064.3411.6.camel@localhost.localdomain> Hi, On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: > On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: > > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > > > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > > > > - write(control_fd, in, sizeof(struct gdlm_plock_info)); > > > > + write(control_fd, in, sizeof(struct dlm_plock_info)); > > > > > > Gah, sorry, I keep fixing that and it keeps reappearing. 
> > > > > > > > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > > > > > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > > > > in "D" state in dlm_rcom_status(), so I guess the second node isn't > > > > getting some dlm reply it expects? > > > > > > dlm inter-node communication is not working here for some reason. There > > > must be something unusual with the way the network is configured on the > > > nodes, and/or a problem with the way the cluster code is applying the > > > network config to the dlm. > > > > > > Ah, I just remembered what this sounds like; we see this kind of thing > > > when a network interface has multiple IP addresses, and/or routing is > > > configured strangely. Others cc'ed could offer better details on exactly > > > what to look for. > > > > OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on > > neither, and it's entirely likely there's some obvious misconfiguration. > > On the kvm host there are 4 virtual interfaces bridged together: > > I ran wireshark on vnet0 while doing the second mount; what I saw was > the second machine opened a tcp connection to port 21064 on the first > (which had already completed the mount), and sent it a single message > identified by wireshark as "DLM3" protocol, type recovery command: > status command. It got back an ACK then a RST. > > Then the same happened in the other direction, with the first machine > sending a similar message to port 21064 on the second, which then reset > the connection. > > --b. > An ACK & RST for the same packet? Or was than an ACK SYN for the SYN and then an RST for the following data packet? Could you post the trace or put it somewhere we can see it? Steve. From ccaulfie at redhat.com Wed Jul 9 08:51:02 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Wed, 09 Jul 2008 09:51:02 +0100 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <1215593064.3411.6.camel@localhost.localdomain> References: <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> Message-ID: <48747BF6.2060001@redhat.com> Steven Whitehouse wrote: > Hi, > > On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>> >>>> >>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>> getting some dlm reply it expects? >>>> dlm inter-node communication is not working here for some reason. There >>>> must be something unusual with the way the network is configured on the >>>> nodes, and/or a problem with the way the cluster code is applying the >>>> network config to the dlm. 
>>>> >>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>> when a network interface has multiple IP addresses, and/or routing is >>>> configured strangely. Others cc'ed could offer better details on exactly >>>> what to look for. >>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>> neither, and it's entirely likely there's some obvious misconfiguration. >>> On the kvm host there are 4 virtual interfaces bridged together: >> I ran wireshark on vnet0 while doing the second mount; what I saw was >> the second machine opened a tcp connection to port 21064 on the first >> (which had already completed the mount), and sent it a single message >> identified by wireshark as "DLM3" protocol, type recovery command: >> status command. It got back an ACK then a RST. >> >> Then the same happened in the other direction, with the first machine >> sending a similar message to port 21064 on the second, which then reset >> the connection. >> That's a symptom of the "connect from non-cluster node" error in the DLM. It's got a connection from an IP address that is not known to cman. So it closes it as a spoofer. You'll need to check the routing of the interfaces. The most common cause of this sort of error is having two interfaces on the same physical (or internal) network. -- Chrissie From swhiteho at redhat.com Wed Jul 9 08:50:31 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Wed, 09 Jul 2008 09:50:31 +0100 Subject: [Linux-cluster] lock_dlm to lock_nolock In-Reply-To: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com> References: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com> Message-ID: <1215593431.3411.9.camel@localhost.localdomain> Hi, On Tue, 2008-07-08 at 13:20 -0400, Ozgur Akan wrote: > Hi, > > Can I mount a gfs filesystem formatted with lock_dlmlock and use it > without a problem in the cluster if I have proper fencing and that fs > is mounted to only one node at a time? > Single node DLM is quite possible, and I use it for testing from time to time. Below though you appear to be using lock_nolock which is also ok provided you only use it on one node at a time, Steve. 
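For example, with the device from your mail, a one-node-only mount would look roughly like:

  mount -t gfs2 -o lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo

or, as a permanent fstab entry:

  /dev/mapper/cluster_vg-test2_lv  /gfstwo  gfs2  noatime,lockproto=lock_nolock  0 0

The one thing to be careful of is that nothing else has the filesystem mounted at the same time; with the lock protocol overridden to lock_nolock there is no inter-node locking at all, so fencing alone will not protect you if a second node mounts it.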
> mount -o > lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo/ > > > [root at rhtest01 ~]# ./ping -rw /gfstwo/test 1 > data increment = 1 > 140012 locks/sec > [root at rhtest01 ~]# gfs2_tool df /gfstwo/ > /gfstwo: > SB lock proto = "lock_dlm" > SB lock table = "testcluster:gfstwo" > SB ondisk format = 1801 > SB multihost format = 1900 > Block size = 4096 > Journals = 3 > Resource Groups = 60 > Mounted lock proto = "lock_nolock" > Mounted lock table = "testcluster:gfstwo" > Mounted host data = "" > Journal number = 0 > Lock module flags = 1 > Local flocks = TRUE > > > thanks, > Ozgur Akan > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From swhiteho at redhat.com Wed Jul 9 08:51:01 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Wed, 09 Jul 2008 09:51:01 +0100 Subject: [Linux-cluster] Re: quota and noatime configurations In-Reply-To: References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com> <1215510236.3475.0.camel@localhost.localdomain> Message-ID: <1215593461.3411.11.camel@localhost.localdomain> Hi, On Tue, 2008-07-08 at 18:12 +0100, Andrew Price wrote: > On 08/07/08 10:43, Steven Whitehouse wrote: > > On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote: > >> Hi, > >> > >> Even I have ' options="noatime,quota=off" ' in my cluster.conf > >> file, > >> I see [gfs2_quotad] running and > >> > >> I can not see quota in mtab file > >> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id > >> =196610:first=1 0 0 > >> > >> is this normal? > >> > > Yes, it will likely only appear if you turn it on since the default is > > off, > > If I'm reading the code correctly, gfs2_quotad is always started > regardless of the quota options. > Yes, thats also true, Steve. From ajeet.singh.raina at logica.com Wed Jul 9 09:56:40 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 15:26:40 +0530 Subject: [Linux-cluster] Alternative to Shared Storage.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ajeet.singh.raina at logica.com Wed Jul 9 09:58:37 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 15:28:37 +0530 Subject: [Linux-cluster] Setting Up Two Node Cluster.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17931@in-ex004.groupinfra.com> It Done.Just Started the Service on both the node and Failover is taking place. Thanks anyway. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Prakash.P at lsi.com Wed Jul 9 10:09:41 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Wed, 9 Jul 2008 18:09:41 +0800 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com> Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From breeves at redhat.com Wed Jul 9 10:09:00 2008 From: breeves at redhat.com (Bryn M. 
Reeves) Date: Wed, 09 Jul 2008 11:09:00 +0100 Subject: [Linux-cluster] Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> Message-ID: <48748E3C.5060002@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Singh Raina, Ajeet wrote: > Hello Guys, > > > > Just Now I have been successful in configuring the two Node Fail-over Cluster. > It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna You probably want to evaluate something a little newer - RHEL-4.2 was released some time ago and there have been significant fixes and feature enhancements in the releases since that time. > Now Let me tell you I don?t have Shared Storage.Is there any alternative for that. > > Somewhere I read about iSCSI but donnno whether it will be helpful. I use software-based iSCSI on pretty much all my test systems - it works great. You need the iSCSI initiator package installed on the systems that will import the devices and an iSCSI target installed on the host that exports the storage. There are several target projects out there in varying states of completeness and functionality. I've used iet (iSCSI enterprise target) on RHEL4 and there is now also stgt (scsi target utils) which is included in the Cluster Storage channel for RHEL5. > Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? http://stgt.berlios.de/ http://iscsitarget.sourceforge.net/ RHEL5 also supports installing to and booting from software iSCSI targets. Regards, Bryn. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iEYEARECAAYFAkh0jjwACgkQ6YSQoMYUY94AnACgnmUhUZ1vB8lqH2je14KdJEu5 p/IAoNfzvAiW1YGPFwahk5PAcXfVYzu/ =ZHpD -----END PGP SIGNATURE----- From ajeet.singh.raina at logica.com Wed Jul 9 10:54:04 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 16:24:04 +0530 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com> Hi, I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new. Can You provide me with steps by steps docs. One more thing - What Minimum Size of Hard Disk we need for That? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 3:40 PM To: linux clustering Subject: [Linux-cluster] RE: Alternative to Shared Storage.. Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. 
Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Wed Jul 9 10:56:55 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 16:26:55 +0530 Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com> Just for Information, How Will we configure NAS Concept you said earlier. What Should I share actually? Are you Talking about the Script? ________________________________ From: Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:24 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage.. Hi, I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new. Can You provide me with steps by steps docs. One more thing - What Minimum Size of Hard Disk we need for That? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 3:40 PM To: linux clustering Subject: [Linux-cluster] RE: Alternative to Shared Storage.. Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. 
You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From breeves at redhat.com Wed Jul 9 10:54:51 2008 From: breeves at redhat.com (Bryn M. Reeves) Date: Wed, 09 Jul 2008 11:54:51 +0100 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com> Message-ID: <487498FB.50907@redhat.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Singh Raina, Ajeet wrote: > Hi, > > > > I would like to go for iSCSI Configuration.That sound good.Atleast I will learn > something new. > > Can You provide me with steps by steps docs. > > One more thing ? What Minimum Size of Hard Disk we need for That? You don't - you can create iSCSI devices using either disk partitions if you have some spare, or just a file located in any file system with enough free space. I often do testing with iSCSI devices that are just a few 10s of MiB in size. Regards, Bryn. 
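As a very rough sketch of the file-backed variant with iet (the IQN, path and size below are only examples, and the init script name depends on how the target was packaged):

On the machine acting as the target:

  dd if=/dev/zero of=/var/lib/iscsi-disks/disk0.img bs=1M count=256

  # /etc/ietd.conf
  Target iqn.2008-07.com.example:storage.disk0
      Lun 0 Path=/var/lib/iscsi-disks/disk0.img,Type=fileio

  service iscsi-target start

On the cluster nodes, with the open-iscsi initiator (RHEL5), discovery and login look something like:

  iscsiadm -m discovery -t sendtargets -p <target-ip>
  iscsiadm -m node -T iqn.2008-07.com.example:storage.disk0 -p <target-ip> --login

after which the volume shows up as an ordinary /dev/sdX on every node that logs in.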
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org iEYEARECAAYFAkh0mPsACgkQ6YSQoMYUY94JcwCgqF2K9a8GrrHLfdW9a9LLqrjt b/wAoLjvKMIA2l0NOBc8+fYl2zzGg7t7 =lzRW -----END PGP SIGNATURE----- From Prakash.P at lsi.com Wed Jul 9 11:34:41 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Wed, 9 Jul 2008 19:34:41 +0800 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com> References: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com> <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7AC0@hkgmail01.lsi.com> Google iSCSI Enterprise target & Open iSCSI Initiator. They have their own How-To's & documentation which will help you. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:24 PM To: linux clustering Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage.. Hi, I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new. Can You provide me with steps by steps docs. One more thing - What Minimum Size of Hard Disk we need for That? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 3:40 PM To: linux clustering Subject: [Linux-cluster] RE: Alternative to Shared Storage.. Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. 
If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Prakash.P at lsi.com Wed Jul 9 11:39:05 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Wed, 9 Jul 2008 19:39:05 +0800 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7AC2@hkgmail01.lsi.com> You should create a directory in Spare server & export that directory as NFS Share. On the cluster nodes there should be an option to create NFS resource. This will mount the Shared directory in your cluster node. So you are going to use that exported directory as Share. Then if you wish you can copy the required scripts into that directory & run the scripts from there hence it will provide you the flexibility of failover & failback. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:27 PM To: linux clustering Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage.. Just for Information, How Will we configure NAS Concept you said earlier. What Should I share actually? Are you Talking about the Script? ________________________________ From: Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:24 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage.. Hi, I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new. Can You provide me with steps by steps docs. One more thing - What Minimum Size of Hard Disk we need for That? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 3:40 PM To: linux clustering Subject: [Linux-cluster] RE: Alternative to Shared Storage.. Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. 
It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From singh.rajeshwar at gmail.com Wed Jul 9 12:16:22 2008 From: singh.rajeshwar at gmail.com (Rajeshwar Singh) Date: Wed, 9 Jul 2008 17:46:22 +0530 Subject: [Linux-cluster] Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> Message-ID: Hi, You you can use freeNAS to emulate an intel/amd machine as NAS and do all the testing (iscsi and nfs and cifs) of protocols. regards 2008/7/9 Singh Raina, Ajeet : > Hello Guys, > > > > Just Now I have been successful in configuring the two Node Fail-over > Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you > people gonna help me out.Since This was small two Node Cluster Setup.I have > few script which I am running on one primary server and on disabling > Ethernet on one , the other is taking responsibility to start the same > service plus rebooting the disabled system > > That is working fine. > > > > Now Let me tell you I don't have Shared Storage.Is there any alternative > for that. > > Somewhere I read about iSCSI but donnno whether it will be helpful. > > > > I have one RHEL System of 40 GB. Can I make it Shared Storage. > > Its Just a matter of Testing a script. > > Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? > > This e-mail and any attachment is for authorised use by the intended > recipient(s) only. It may contain proprietary material, confidential > information and/or be subject to legal privilege. It should not be copied, > disclosed to, retained or used by, any other party. If you are not an > intended recipient then please promptly delete this e-mail and any > attachment and all copies and inform the sender. Thank you. 
> > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mm at yuhu.biz Wed Jul 9 12:30:04 2008 From: mm at yuhu.biz (Marian Marinov) Date: Wed, 9 Jul 2008 15:30:04 +0300 Subject: [Linux-cluster] Alternative to Shared Storage.. In-Reply-To: References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> Message-ID: <200807091530.04510.mm@yuhu.biz> You can also create your own shared storage using glusterfs. This way the only thing you will need is a FUSE support in your kernels. Without touching anything else on the system. regards Marian Marinov On Wednesday 09 July 2008 15:16:22 Rajeshwar Singh wrote: > Hi, > You you can use freeNAS to emulate an intel/amd machine as NAS and do all > the testing (iscsi and nfs and cifs) of protocols. > > regards > > 2008/7/9 Singh Raina, Ajeet : > > Hello Guys, > > > > > > > > Just Now I have been successful in configuring the two Node Fail-over > > Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know > > you people gonna help me out.Since This was small two Node Cluster > > Setup.I have few script which I am running on one primary server and on > > disabling Ethernet on one , the other is taking responsibility to start > > the same service plus rebooting the disabled system > > > > That is working fine. > > > > > > > > Now Let me tell you I don't have Shared Storage.Is there any alternative > > for that. > > > > Somewhere I read about iSCSI but donnno whether it will be helpful. > > > > > > > > I have one RHEL System of 40 GB. Can I make it Shared Storage. > > > > Its Just a matter of Testing a script. > > > > Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? > > > > This e-mail and any attachment is for authorised use by the intended > > recipient(s) only. It may contain proprietary material, confidential > > information and/or be subject to legal privilege. It should not be > > copied, disclosed to, retained or used by, any other party. If you are > > not an intended recipient then please promptly delete this e-mail and any > > attachment and all copies and inform the sender. Thank you. > > > > -- > > Linux-cluster mailing list > > Linux-cluster at redhat.com > > https://www.redhat.com/mailman/listinfo/linux-cluster From ajeet.singh.raina at logica.com Wed Jul 9 13:22:07 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Wed, 9 Jul 2008 18:52:07 +0530 Subject: [Linux-cluster] RE: Alternative to Shared Storage.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7AC2@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17936@in-ex004.groupinfra.com> Hi, I attempted doing NFS setup.What I did is I have one more RHEL Machine where I ran the following command: # vi /etc/exports /datashare *(rw,sync,no_root_squash) #service portmap restart #service nfs restart [root at pe ~]# exportfs /datashare Is it fine? Now, I went to the two nodes and tried : [root at 1014236106 ~]# showmount -e 10.14.236.169 Export list for 10.14.236.169: /datashare * [root at 1014236106 ~]# The Same Shared Being shown by the second Cluster Node. Now I opened : #system-config-cluster > Went to Cluster Configuration > Add New Resource. Now I am confused. There are three Options: 1. NFS Mount 2. NFS Exports 3. NFS Client. When I attempted doing NFS Exports , It just says NAME OF EXPORT CONFIGURATION...Whats That Now? 
Is it same as entry as /datashare.Or Otherwise Else? I need to Choose NFS Mount or Exports or NFS Client. Let me tell you the condition again.I have two Cluster Nodes and Am using NFS as Alternative Shared Storage. Pls Help. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 5:09 PM To: linux clustering Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage.. You should create a directory in Spare server & export that directory as NFS Share. On the cluster nodes there should be an option to create NFS resource. This will mount the Shared directory in your cluster node. So you are going to use that exported directory as Share. Then if you wish you can copy the required scripts into that directory & run the scripts from there hence it will provide you the flexibility of failover & failback. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:27 PM To: linux clustering Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage.. Just for Information, How Will we configure NAS Concept you said earlier. What Should I share actually? Are you Talking about the Script? ________________________________ From: Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 4:24 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage.. Hi, I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new. Can You provide me with steps by steps docs. One more thing - What Minimum Size of Hard Disk we need for That? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Wednesday, July 09, 2008 3:40 PM To: linux clustering Subject: [Linux-cluster] RE: Alternative to Shared Storage.. Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume. And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers. If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically. Regards, Prakash ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Wednesday, July 09, 2008 3:27 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] Alternative to Shared Storage.. Hello Guys, Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system That is working fine. Now Let me tell you I don't have Shared Storage.Is there any alternative for that. Somewhere I read about iSCSI but donnno whether it will be helpful. I have one RHEL System of 40 GB. Can I make it Shared Storage. 
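On the three choices above: for this layout, where the two cluster nodes only consume a share that a separate machine exports, the resource wanted is "NFS Mount"; "NFS Export" and "NFS Client" are for a cluster that serves NFS itself. The GUI's NFS Mount maps to a netfs resource in cluster.conf. A rough, illustrative fragment -- the 10.14.236.169 address and /datashare path come from the mails above, everything else is a placeholder and attribute names can differ between releases:

<rm>
  <resources>
    <!-- mount the share exported by the spare machine -->
    <netfs name="datashare" host="10.14.236.169" export="/datashare"
           mountpoint="/mnt/datashare" fstype="nfs" options="rw,sync"
           force_unmount="1"/>
  </resources>
  <service autostart="1" name="testsvc">
    <netfs ref="datashare"/>
    <script name="myscript" file="/mnt/datashare/myscript.sh"/>
  </service>
</rm>

With something along these lines, whichever node owns the service mounts the share and runs the script from it, so the script fails over together with the mount.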
Its Just a matter of Testing a script. Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfilipek at crscold.com Wed Jul 9 13:51:57 2008 From: bfilipek at crscold.com (Brad Filipek) Date: Wed, 9 Jul 2008 08:51:57 -0500 Subject: [Linux-cluster] Basic 2 node NFS cluster setup help Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com> I am a little unsure on how to properly setup an NFS export on my 2 node cluster. I have 1 service in cluster manager called "cluster" and 4 resources: 1) Virtual IP of 172.25.7.10 (which binds to eth0) 2) Virtual IP of 172.25.8.10 (which binds to eth1) 3) ext3 file system mount at /SAN/LogVol2 called "data" 4) ext3 file system mount at /SAN/LogVol3 called "shared" When I start the cluster services using just these 4 resources assiged to my one service called "cluster", everything starts up and works fine. What I need to do is assign 3 NFS exports: /SAN/LogVol3/files webserver(ro,sync) /SAN/LogVol3/webup webserver(rw,sync) /SAN/LogVol2/webdown webserver(ro,sync) Do I need to create 3 new "NFS Export" resources for these? When I select the "NFS Export" option within cluster suite, I only have one field to fill in - Name. It does not let me select the path that I want to export and which options to allow such as the host, ro or rw, etc. I am just trying to make the above exports available on my cluster's virtual IP of 172.25.7.10 instead of setting it up on each of the two nodes and manually starting the NFS service on whichever node is active in the cluster. Do I still need to create an /etc/exports file with all 3 of these entries on each node? Or is there a config file somewhere else? I read the NFS cookbook but it explains how to setup NFS using multiple services (I only have one service) with active/active GFS (I am using EXT3 in active/passive). Thanks in advance for any help. Brad Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. 
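On the single "Name" field question above: in the cluster.conf resource tree the exported path is not typed into the NFS Export resource at all -- it is inherited from the parent file-system resource -- and the allowed host plus ro/rw options go on child NFS Client resources. A rough sketch for the service described above (device paths are placeholders and exact attribute names vary with the rgmanager release):

<service autostart="1" name="cluster">
  <ip address="172.25.7.10" monitor_link="1"/>
  <ip address="172.25.8.10" monitor_link="1"/>
  <fs name="shared" device="/dev/VolGroup/LogVol3" mountpoint="/SAN/LogVol3" fstype="ext3">
    <nfsexport name="LogVol3-exports">
      <!-- one NFS Client child per host/option combination -->
      <nfsclient name="webup" target="webserver" path="/SAN/LogVol3/webup" options="rw,sync"/>
      <nfsclient name="files" target="webserver" path="/SAN/LogVol3/files" options="ro,sync"/>
    </nfsexport>
  </fs>
  <fs name="data" device="/dev/VolGroup/LogVol2" mountpoint="/SAN/LogVol2" fstype="ext3">
    <nfsexport name="LogVol2-exports">
      <nfsclient name="webdown" target="webserver" path="/SAN/LogVol2/webdown" options="ro,sync"/>
    </nfsexport>
  </fs>
</service>

Arranged this way rgmanager issues the exportfs calls itself on whichever node owns the service, so the paths normally do not also need to be listed in /etc/exports on the nodes.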
If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmacfarland at nexatech.com Wed Jul 9 15:28:37 2008 From: jmacfarland at nexatech.com (Jeff Macfarland) Date: Wed, 09 Jul 2008 10:28:37 -0500 Subject: [Linux-cluster] Alternative to Shared Storage.. In-Reply-To: <48748E3C.5060002@redhat.com> References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com> <48748E3C.5060002@redhat.com> Message-ID: <4874D925.8010000@nexatech.com> Do any of the software targets yet support scsi reservations? The one I work with mostly (iet) unfortunately does not. Bryn M. Reeves wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Singh Raina, Ajeet wrote: >> Hello Guys, >> >> >> >> Just Now I have been successful in configuring the two Node Fail-over Cluster. >> It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna > > You probably want to evaluate something a little newer - RHEL-4.2 was > released some time ago and there have been significant fixes and feature > enhancements in the releases since that time. > >> Now Let me tell you I don?t have Shared Storage.Is there any alternative for that. >> >> Somewhere I read about iSCSI but donnno whether it will be helpful. > > I use software-based iSCSI on pretty much all my test systems - it works > great. You need the iSCSI initiator package installed on the systems > that will import the devices and an iSCSI target installed on the host > that exports the storage. There are several target projects out there in > varying states of completeness and functionality. I've used iet (iSCSI > enterprise target) on RHEL4 and there is now also stgt (scsi target > utils) which is included in the Cluster Storage channel for RHEL5. > >> Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that? > > http://stgt.berlios.de/ > http://iscsitarget.sourceforge.net/ > > RHEL5 also supports installing to and booting from software iSCSI targets. > > Regards, > Bryn. > > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.9 (GNU/Linux) > Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org > > iEYEARECAAYFAkh0jjwACgkQ6YSQoMYUY94AnACgnmUhUZ1vB8lqH2je14KdJEu5 > p/IAoNfzvAiW1YGPFwahk5PAcXfVYzu/ > =ZHpD > -----END PGP SIGNATURE----- -- Jeff Macfarland (jmacfarland at nexatech.com) Nexa Technologies - 972.747.8879 Systems Administrator GPG Key ID: 0x5F1CA61B GPG Key Server: hkp://wwwkeys.pgp.net From bfields at fieldses.org Wed Jul 9 15:40:04 2008 From: bfields at fieldses.org (J. 
Bruce Fields) Date: Wed, 9 Jul 2008 11:40:04 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <48747BF6.2060001@redhat.com> References: <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> Message-ID: <20080709154004.GC5780@fieldses.org> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: > Steven Whitehouse wrote: >> Hi, >> >> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>>> >>>>> >>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>>> getting some dlm reply it expects? >>>>> dlm inter-node communication is not working here for some reason. There >>>>> must be something unusual with the way the network is configured on the >>>>> nodes, and/or a problem with the way the cluster code is applying the >>>>> network config to the dlm. >>>>> >>>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>>> when a network interface has multiple IP addresses, and/or routing is >>>>> configured strangely. Others cc'ed could offer better details on exactly >>>>> what to look for. >>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>>> neither, and it's entirely likely there's some obvious misconfiguration. >>>> On the kvm host there are 4 virtual interfaces bridged together: >>> I ran wireshark on vnet0 while doing the second mount; what I saw was >>> the second machine opened a tcp connection to port 21064 on the first >>> (which had already completed the mount), and sent it a single message >>> identified by wireshark as "DLM3" protocol, type recovery command: >>> status command. It got back an ACK then a RST. >>> >>> Then the same happened in the other direction, with the first machine >>> sending a similar message to port 21064 on the second, which then reset >>> the connection. >>> > > That's a symptom of the "connect from non-cluster node" error in the > DLM. I think I am getting a message to that affect in my logs. > It's got a connection from an IP address that is not known to cman. > So it closes it as a spoofer OK. Is there an easy way to see the list of ip addresses known to cman? > You'll need to check the routing of the interfaces. The most common > cause of this sort of error is having two interfaces on the same > physical (or internal) network. Thanks, that's helpful. --b. From bfields at fieldses.org Wed Jul 9 15:29:46 2008 From: bfields at fieldses.org (J. 
Bruce Fields) Date: Wed, 9 Jul 2008 11:29:46 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <1215593064.3411.6.camel@localhost.localdomain> References: <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> Message-ID: <20080709152946.GB5780@fieldses.org> On Wed, Jul 09, 2008 at 09:44:24AM +0100, Steven Whitehouse wrote: > Hi, > > On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: > > On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: > > > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > > > > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > > > > > - write(control_fd, in, sizeof(struct gdlm_plock_info)); > > > > > + write(control_fd, in, sizeof(struct dlm_plock_info)); > > > > > > > > Gah, sorry, I keep fixing that and it keeps reappearing. > > > > > > > > > > > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > > > > > > > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > > > > > in "D" state in dlm_rcom_status(), so I guess the second node isn't > > > > > getting some dlm reply it expects? > > > > > > > > dlm inter-node communication is not working here for some reason. There > > > > must be something unusual with the way the network is configured on the > > > > nodes, and/or a problem with the way the cluster code is applying the > > > > network config to the dlm. > > > > > > > > Ah, I just remembered what this sounds like; we see this kind of thing > > > > when a network interface has multiple IP addresses, and/or routing is > > > > configured strangely. Others cc'ed could offer better details on exactly > > > > what to look for. > > > > > > OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on > > > neither, and it's entirely likely there's some obvious misconfiguration. > > > On the kvm host there are 4 virtual interfaces bridged together: > > > > I ran wireshark on vnet0 while doing the second mount; what I saw was > > the second machine opened a tcp connection to port 21064 on the first > > (which had already completed the mount), and sent it a single message > > identified by wireshark as "DLM3" protocol, type recovery command: > > status command. It got back an ACK then a RST. > > > > Then the same happened in the other direction, with the first machine > > sending a similar message to port 21064 on the second, which then reset > > the connection. > > > > --b. > > > An ACK & RST for the same packet? Or was than an ACK SYN for the SYN and > then an RST for the following data packet? Could you post the trace or > put it somewhere we can see it? Sure, thanks. It's at http://www.fieldses.org/~bfields/failed-dlm.pcap http://www.fieldses.org/~bfields/failed-dlm-filtered.pcap (The second is just the dlm traffic, with all the ais, ssh, dns, etc. filtered out.) --b. 
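For anyone wanting to reproduce a capture like the ones linked above, the DLM inter-node traffic can be isolated by its TCP port (21064 by default); on the KVM host that would be the bridge/tap interface -- the interface name below is only an example:

tcpdump -i vnet0 -s 0 -w dlm.pcap port 21064
tcpdump -nr dlm.pcap        # or open dlm.pcap in wireshark to get the DLM3 dissector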
From ccaulfie at redhat.com Wed Jul 9 15:50:14 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Wed, 09 Jul 2008 16:50:14 +0100 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080709154004.GC5780@fieldses.org> References: <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> Message-ID: <4874DE36.6030704@redhat.com> J. Bruce Fields wrote: > On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: >> Steven Whitehouse wrote: >>> Hi, >>> >>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>>>> >>>>>> >>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>>>> getting some dlm reply it expects? >>>>>> dlm inter-node communication is not working here for some reason. There >>>>>> must be something unusual with the way the network is configured on the >>>>>> nodes, and/or a problem with the way the cluster code is applying the >>>>>> network config to the dlm. >>>>>> >>>>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>>>> when a network interface has multiple IP addresses, and/or routing is >>>>>> configured strangely. Others cc'ed could offer better details on exactly >>>>>> what to look for. >>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>>>> neither, and it's entirely likely there's some obvious misconfiguration. >>>>> On the kvm host there are 4 virtual interfaces bridged together: >>>> I ran wireshark on vnet0 while doing the second mount; what I saw was >>>> the second machine opened a tcp connection to port 21064 on the first >>>> (which had already completed the mount), and sent it a single message >>>> identified by wireshark as "DLM3" protocol, type recovery command: >>>> status command. It got back an ACK then a RST. >>>> >>>> Then the same happened in the other direction, with the first machine >>>> sending a similar message to port 21064 on the second, which then reset >>>> the connection. >>>> >> That's a symptom of the "connect from non-cluster node" error in the >> DLM. > > I think I am getting a message to that affect in my logs. > >> It's got a connection from an IP address that is not known to cman. >> So it closes it as a spoofer > > OK. Is there an easy way to see the list of ip addresses known to cman? 
yes, cman_tool nodes -a will show you all the nodes and their known IP addresses -- Chrissie From jerlyon at gmail.com Wed Jul 9 16:04:39 2008 From: jerlyon at gmail.com (Jeremy Lyon) Date: Wed, 9 Jul 2008 10:04:39 -0600 Subject: [Linux-cluster] clustat requires root Message-ID: <779919740807090904u77b0b602q8eca5409665ca018@mail.gmail.com> Hi, I just noticed that in RHEL 4 clustat could be run by any user and now in RHEL 5 it requires root. Was this done on purpose or is it a by product of the changes of cluster from v1 -> v2? Is there anything that can be done to allow a user to run clustat without sudo. I don't think I want to set it with the suid bit. RHEL4: rhel4:/u/oracle> /usr/sbin/clustat Member Status: Quorate Member Name Status ------ ---- ------ rhel4-2 Online, rgmanager rhel4 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- griddnvr rhel4 started fibrbase rhel4 started pcms2 rhel4 started notifprd rhel4 started qtprod (none) disabled rhel4:/u/oracle> /usr/sbin/clustat -v clustat version 1.9.72 Connected via: CMAN/SM Plugin v1.1.7.4 rhel4:/u/oracle> rpm -q rgmanager rgmanager-1.9.72-1 RHEL5: rhel5 /u/oracle> /usr/sbin/clustat Could not connect to CMAN: Permission denied rhel5 /u/oracle> /usr/sbin/clustat -v Could not connect to CMAN: Permission denied rhel5 /u/oracle> rpm -q rgmanager rgmanager-2.0.38-2.el5_2.1 [root at rhel5 ~]# clustat -v clustat version DEVEL TIA -Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From bfields at fieldses.org Wed Jul 9 16:32:22 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Wed, 9 Jul 2008 12:32:22 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <4874DE36.6030704@redhat.com> References: <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> Message-ID: <20080709163222.GF5780@fieldses.org> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote: > J. Bruce Fields wrote: >> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: >>> Steven Whitehouse wrote: >>>> Hi, >>>> >>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>>>>> >>>>>>> >>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>>>>> getting some dlm reply it expects? >>>>>>> dlm inter-node communication is not working here for some reason. There >>>>>>> must be something unusual with the way the network is configured on the >>>>>>> nodes, and/or a problem with the way the cluster code is applying the >>>>>>> network config to the dlm. 
>>>>>>> >>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>>>>> when a network interface has multiple IP addresses, and/or routing is >>>>>>> configured strangely. Others cc'ed could offer better details on exactly >>>>>>> what to look for. >>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>>>>> neither, and it's entirely likely there's some obvious misconfiguration. >>>>>> On the kvm host there are 4 virtual interfaces bridged together: >>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was >>>>> the second machine opened a tcp connection to port 21064 on the first >>>>> (which had already completed the mount), and sent it a single message >>>>> identified by wireshark as "DLM3" protocol, type recovery command: >>>>> status command. It got back an ACK then a RST. >>>>> >>>>> Then the same happened in the other direction, with the first machine >>>>> sending a similar message to port 21064 on the second, which then reset >>>>> the connection. >>>>> >>> That's a symptom of the "connect from non-cluster node" error in the >>> DLM. >> >> I think I am getting a message to that affect in my logs. >> >>> It's got a connection from an IP address that is not known to cman. >>> So it closes it as a spoofer >> >> OK. Is there an easy way to see the list of ip addresses known to cman? > > yes, > > cman_tool nodes -a > > will show you all the nodes and their known IP addresses piglet2:~# cman_tool nodes -a Node Sts Inc Joined Name 1 M 376 2008-07-09 12:30:32 piglet1 Addresses: 192.168.122.129 2 M 368 2008-07-09 12:30:31 piglet2 Addresses: 192.168.122.130 3 M 380 2008-07-09 12:30:33 piglet3 Addresses: 192.168.122.131 4 M 372 2008-07-09 12:30:31 piglet4 Addresses: 192.168.122.132 These addresses are correct (and are the same addresses that show up in the packet trace). I must be overlooking something very obvious.... --b. From wcarty at gmail.com Wed Jul 9 17:54:52 2008 From: wcarty at gmail.com (Wayne Carty) Date: Wed, 9 Jul 2008 13:54:52 -0400 Subject: [Linux-cluster] Freezing GFS mount in a cluster In-Reply-To: <004a01c8e0f5$b6d8ccd0$248a6670$@net> References: <004a01c8e0f5$b6d8ccd0$248a6670$@net> Message-ID: I'm currently using the same ISCSI san with a 2 node cluster and not having any issues. I'm currently running Centos 4.6. Are you using conga to manage your cluster? what does clustat show when you run it before mounting you gfs filesystem. I'm also using manual fencing and so far I'm not having a problem. Here is a look at my config. I'm not using it to run any services or to mount the filesystems. It's just basic. ~ On Tue, Jul 8, 2008 at 8:25 AM, Kees Hoekzema wrote: > Hello List, > > Recently we bought an Dell MD3000 iSCSI storage system and we are trying to > get GFS running on it. I have 3 test servers hooked up to the MD3000i and I > have the cluster working, including multipath and different paths. > > When I had the cluster up with all 3 nodes in the fence domain and > cman_tool > status reporting 3 nodes I made a GFS partition and formatted it: > # gfs_mkfs -j 10 -p lock_dlm -t tweakers:webdata /dev/mapper/webdata-part1 > > This worked and I could mount the filesystem on the server I made it on. > However, as soon as I tried to mount it on one of the two other servers, I > would get a freeze and get fenced. After a fresh reboot of the complete > cluster I tried to mount it again. 
The first server could mount it, but any > server that would try to mount it with the first server having the gfs > mounted would crash. > > As I'm fairly new to cman/fencing/gfs-clusters, I was wondering if this is > something 'silly' configuration error, or that there is something seriously > wrong. > > Another thing I would like to know is where to get debug information. Right > now there is not a lot debug information available, or at least I couldn't > find it. One thing that particularly annoyed me was the ' Waiting for > fenced > to join the fence group.' message which didn't come with any explanation > whatsoever. That message finally went away when I powered up the two other > servers and started the cluster on all three simultaneously. > > Anyway, my cluster config for this testing. I use manual fencing for > testing as the environment I test it in does not have exactly the same > hardware as I have in the production environment. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Conclusion: > - why can't I mount GFS on another server, when it is mounted on one? > - how do I get more debug information (ie: reason why a server can't join a > fence domein. Or the reason why a server gets fenced). > > Thank you all for your time, > > Kees Hoekzema > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- Wayne Carty -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 04:26:04 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 09:56:04 +0530 Subject: [Linux-cluster] Knowing Cluster Version.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7AC0@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17937@in-ex004.groupinfra.com> I have two RHEL 4.0 Update 2 Servers which I have installed with the following packages each : ccs-1.0.6-0.x86_64.rpm cman-1.0.8-0.x86_64.rpm cman-kernel-smp-2.6.9-39.5.x86_64.rpm cman-kernel-smp-2.6.9-44.7.x86_64.rpm device-mapper-1.02.25-1.el4.x86_64.rpm dlm-1.0.1-1.x86_64.rpm dlm-kernel-smp-2.6.9-37.7.x86_64.rpm dlm-kernel-smp-2.6.9-39.1.x86_64.rpm dlm-kernel-smp-2.6.9-42.7.x86_64.rpm dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm lvm2-cluster-2.02.21-7.el4.x86_64.rpm lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm magma-1.0.5-0.x86_64.rpm magma-plugins-1.0.8-0.x86_64.rpm rgmanager-1.9.50-0.x86_64.rpm system-config-cluster-1.0.27-1.0.noarch.rpm system-config-cluster-1[1].0.27-1.0.noarch.rpm perl-Crypt-SSLeay-0.51-5.x86_64.rpm What will be my Cluster Version?How to Check That? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nerix at free.fr Thu Jul 10 08:25:02 2008 From: nerix at free.fr (eric) Date: Thu, 10 Jul 2008 10:25:02 +0200 Subject: [Linux-cluster] two node cluster update Message-ID: <4875C75E.7000803@free.fr> Hi list, I'd like to know if there is a "best practice" for updating* a two-node (active/passive) cluster with qdisk. May I start with the passive node ? Can it become dangerous if the passive node run differents packages from the active node ? Here are my packages to update. from | to ------------------------------------------------------------------------------------------------------ openais-0.80.3-7.el5 | openais 0.80.3-15.el5 cman-2.0.73-1.el5_1.1 | cman 2.0.84-2.el5 rgmanager-2.0.31-1.el5 | rgmanager 2.0.38-2.el5_2.1 ricci-0.10.0-6.el5 | ricci 0.12.0-7.el5.centos.3 modcluster-0.10.0-5.el5 | modcluster 0.12.0-7.el5.centos Thanks. Eric. *updating from CentOS5.0 to CentOS5.2 (yum update). From ajeet.singh.raina at logica.com Thu Jul 10 08:52:03 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 14:22:03 +0530 Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com> I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at bl04mpdsk ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Prakash.P at lsi.com Thu Jul 10 09:07:58 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Thu, 10 Jul 2008 17:07:58 +0800 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com> ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at bl04mpdsk ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 09:11:50 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 14:41:50 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com> Shall I need to mention Lun 0 ? is it needed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at bl04mpdsk ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Prakash.P at lsi.com Thu Jul 10 09:18:06 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Thu, 10 Jul 2008 17:18:06 +0800 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com> References: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com> <0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C6E@hkgmail01.lsi.com> ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at bl04mpdsk ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. 
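If no spare partition is available for the LUN, the backing file mentioned above has to exist and have a real size before ietd exports it; a zero-length file created with touch will not give a usable LUN. One illustrative way to set it up (size, paths and IQN are examples only):

# pre-allocate a ~10 GB sparse backing file
dd if=/dev/zero of=/iscsi/lun0.img bs=1M count=0 seek=10240

# /etc/ietd.conf
Target iqn.2008-07.com.example:storage.lun1
        Lun 0 Path=/iscsi/lun0.img,Type=fileio
        Alias iDISK0

service iscsi-target restart     # pick up the new target/LUN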
It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccaulfie at redhat.com Thu Jul 10 09:26:54 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Thu, 10 Jul 2008 10:26:54 +0100 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080709163222.GF5780@fieldses.org> References: <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> Message-ID: <4875D5DE.7030601@redhat.com> J. Bruce Fields wrote: > On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote: >> J. Bruce Fields wrote: >>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: >>>> Steven Whitehouse wrote: >>>>> Hi, >>>>> >>>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >>>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>>>>>> >>>>>>>> >>>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>>>>>> getting some dlm reply it expects? >>>>>>>> dlm inter-node communication is not working here for some reason. There >>>>>>>> must be something unusual with the way the network is configured on the >>>>>>>> nodes, and/or a problem with the way the cluster code is applying the >>>>>>>> network config to the dlm. >>>>>>>> >>>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>>>>>> when a network interface has multiple IP addresses, and/or routing is >>>>>>>> configured strangely. Others cc'ed could offer better details on exactly >>>>>>>> what to look for. >>>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>>>>>> neither, and it's entirely likely there's some obvious misconfiguration. >>>>>>> On the kvm host there are 4 virtual interfaces bridged together: >>>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was >>>>>> the second machine opened a tcp connection to port 21064 on the first >>>>>> (which had already completed the mount), and sent it a single message >>>>>> identified by wireshark as "DLM3" protocol, type recovery command: >>>>>> status command. It got back an ACK then a RST. >>>>>> >>>>>> Then the same happened in the other direction, with the first machine >>>>>> sending a similar message to port 21064 on the second, which then reset >>>>>> the connection. >>>>>> >>>> That's a symptom of the "connect from non-cluster node" error in the >>>> DLM. >>> I think I am getting a message to that affect in my logs. >>> >>>> It's got a connection from an IP address that is not known to cman. 
>>>> So it closes it as a spoofer >>> OK. Is there an easy way to see the list of ip addresses known to cman? >> yes, >> >> cman_tool nodes -a >> >> will show you all the nodes and their known IP addresses > > piglet2:~# cman_tool nodes -a > Node Sts Inc Joined Name > 1 M 376 2008-07-09 12:30:32 piglet1 > Addresses: 192.168.122.129 > 2 M 368 2008-07-09 12:30:31 piglet2 > Addresses: 192.168.122.130 > 3 M 380 2008-07-09 12:30:33 piglet3 > Addresses: 192.168.122.131 > 4 M 372 2008-07-09 12:30:31 piglet4 > Addresses: 192.168.122.132 > > These addresses are correct (and are the same addresses that show up in the > packet trace). > > I must be overlooking something very obvious.... Hmm, very odd. Are those IP addresses consistent across all nodes in the cluster ? -- Chrissie From ajeet.singh.raina at logica.com Thu Jul 10 09:26:40 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 14:56:40 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C6E@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com> So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. 
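On the DiscoveryAddress question above: it is just the IP address (optionally IP:port) of the target machine, so with the layout described here both cluster nodes would point at the 10.14.236.134 box. A rough RHEL 4 style initiator setup without CHAP, matching the empty IncomingUser/OutgoingUser lines, could look like this (illustrative, untested here):

# /etc/iscsi.conf on each of the two cluster nodes
DiscoveryAddress=10.14.236.134

service iscsi restart
iscsi-ls      # lists sessions/LUNs found on the target
fdisk -l      # the exported LUN appears as a new /dev/sd? device

The cluster.conf side then treats that new device like any other shared disk, for example as the device behind an fs or GFS resource.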
In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at bl04mpdsk ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 10:00:02 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 15:30:02 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com> I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
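The "FATAL: Module iscsi_trgt not found" failure above means the target host cannot load the IET kernel module: typically the iscsitarget-kernel package was built for a different kernel than the one running, or the module dependency list was never regenerated after the RPM install (the poster later reports that simply running depmod fixed it). A minimal check-and-fix sketch, using the package names from the transcript above:

[code]
# Does the installed kernel-module package match the running kernel?
uname -r
rpm -q iscsitarget iscsitarget-kernel

# Regenerate module dependencies and try loading the module by hand
depmod -a
modprobe iscsi_trgt && lsmod | grep iscsi_trgt

# If the module now loads, the service should start cleanly
service iscsi-target restart
[/code]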
netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Prakash.P at lsi.com Thu Jul 10 10:09:06 2008 From: Prakash.P at lsi.com (P, Prakash) Date: Thu, 10 Jul 2008 18:09:06 +0800 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com> <0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com> Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C8B@hkgmail01.lsi.com> This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. 
If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 10:42:59 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 16:12:59 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C8B@hkgmail01.lsi.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17942@in-ex004.groupinfra.com> Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? 
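Two separate things are worth untangling at this point. The startup failure itself came from the missing iscsi_trgt module (fixed with depmod, as noted above), not from the Path= setting; but Path= in a Type=fileio LUN should point at a regular file or a block device, not at a directory such as /root/cluster_share. A minimal sketch of a file-backed target definition, assuming a 4 GB backing file and the IQN style already used in this thread; the CHAP user and password are the sample values from the quoted howto, not required settings:

[code]
# create a backing file for the LUN (4 GB of zeroes)
dd if=/dev/zero of=/root/cluster_share.img bs=1M count=4096

# /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
        # optional CHAP; leave both lines empty or out for an open target
        IncomingUser gfs secretsecret
        OutgoingUser
        Lun 0 Path=/root/cluster_share.img,Type=fileio
        Alias iDISK0
        #MaxConnections 6
[/code]

On the earlier question about "storage.lun1": the part after the colon in an IQN (iqn.yyyy-mm.reversed-domain:identifier) is just a free-form label, so any string that identifies the target is fine.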
[root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 10:57:39 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 16:27:39 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17942@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17943@in-ex004.groupinfra.com> I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. 
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 11:03:22 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 16:33:22 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17943@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com> [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. 
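With the session showing ESTABLISHED, the initiator side looks done: DiscoveryAddress in /etc/iscsi.conf is simply the IP (and optional port) of the target portal that the initiator queries for targets, which answers the earlier "IP??" question, and the IET VIRTUAL-DISK entry in /proc/scsi/scsi is the exported LUN appearing as a local SCSI disk. The next step on each cluster node is to find the device name that disk was given and confirm both nodes see the same LUN. A minimal sketch, assuming the new disk shows up as an extra /dev/sdX (the exact name depends on what is already attached):

[code]
# on each cluster node
iscsi-ls                 # session details for the target
cat /proc/scsi/scsi      # the IET / VIRTUAL-DISK entry
fdisk -l                 # the exported LUN appears as an additional /dev/sdX disk
[/code]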
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. 
If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From ajeet.singh.raina at logica.com Thu Jul 10 11:23:18 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 16:53:18 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17946@in-ex004.groupinfra.com> Few issue Still prevailing: As I guess, after Setting Up iSCSI Target and iSCSI Initiator, I am not seeing any shared on both the cluster nodes. I guess I missed few step: The DOC at end says: Voila! you should now have a new SCSI disc avaiable for use. Now you can use fdisk to partition the disk (fdisk /dev/sdb) and use mkfs to format the partition (which is out of the scope of this howto). Do I need to do fdisk? Pls Help. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? 
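[Editorial aside, not part of the original message: the "FATAL: Module iscsi_trgt not found" error above usually means that the kernel module shipped by the iscsitarget-kernel package has not yet been picked up by the module dependency index, or that the package was built for a different kernel than the one running. Later in this thread the poster reports that running depmod cleared it. A minimal check, assuming the iscsitarget-kernel package really matches the running kernel reported by uname -r:

# rebuild the module dependency index so modprobe can find the new module
depmod -a
# load the IET target module and confirm it is present
modprobe iscsi_trgt
lsmod | grep iscsi_trgt
# then try the init script again
service iscsi-target restart

If modprobe still cannot find iscsi_trgt, the kernel-module RPM was most likely built for a different kernel version.]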
I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From ajeet.singh.raina at logica.com Thu Jul 10 11:52:40 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 17:22:40 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17946@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17949@in-ex004.groupinfra.com> I tried running this on Client Machine. #tail -f /var/log/messages It Says: Jul 9 12:42:40 BL02DL385 kernel: sda : very big device. try to use READ CAPACITY(16). Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr sectors (0 MB) Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: drive cache: write back Jul 9 12:42:40 BL02DL385 kernel: Attached scsi disk sda at scsi1, channel 0, id 0, lun 0 Jul 9 12:42:40 BL02DL385 scsi.agent[28387]: disk at /devices/platform/host1/target1:0:0/1:0:0:0 Jul 9 12:44:31 BL02DL385 kernel: sda : very big device. try to use READ CAPACITY(16). Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr sectors (0 MB) Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: drive cache: write back ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:53 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
[root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Thu Jul 10 11:56:05 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Thu, 10 Jul 2008 17:26:05 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17949@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794A@in-ex004.groupinfra.com> And While Running this on Client,it Says: find /sys/devices/platform/host* -name "block*" /sys/devices/platform/host1/target1:0:0/1:0:0:0/block ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 5:23 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I tried running this on Client Machine. #tail -f /var/log/messages It Says: Jul 9 12:42:40 BL02DL385 kernel: sda : very big device. try to use READ CAPACITY(16). Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr sectors (0 MB) Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: drive cache: write back Jul 9 12:42:40 BL02DL385 kernel: Attached scsi disk sda at scsi1, channel 0, id 0, lun 0 Jul 9 12:42:40 BL02DL385 scsi.agent[28387]: disk at /devices/platform/host1/target1:0:0/1:0:0:0 Jul 9 12:44:31 BL02DL385 kernel: sda : very big device. try to use READ CAPACITY(16). Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr sectors (0 MB) Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: drive cache: write back ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:53 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
Few issue Still prevailing: As I guess, after Setting Up iSCSI Target and iSCSI Initiator, I am not seeing any shared on both the cluster nodes. I guess I missed few step: The DOC at end says: Voila! you should now have a new SCSI disc avaiable for use. Now you can use fdisk to partition the disk (fdisk /dev/sdb) and use mkfs to format the partition (which is out of the scope of this howto). Do I need to do fdisk? Pls Help. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. 
Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From swhiteho at redhat.com Thu Jul 10 13:27:14 2008 From: swhiteho at redhat.com (Steven Whitehouse) Date: Thu, 10 Jul 2008 14:27:14 +0100 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080709163222.GF5780@fieldses.org> References: <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> Message-ID: <1215696434.4011.161.camel@quoit> Hi, On Wed, 2008-07-09 at 12:32 -0400, J. Bruce Fields wrote: > On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote: > > J. Bruce Fields wrote: > >> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: > >>> Steven Whitehouse wrote: > >>>> Hi, > >>>> > >>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: > >>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: > >>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > >>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > >>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); > >>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); > >>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. 
> >>>>>>> > >>>>>>> > >>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > >>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > >>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't > >>>>>>>> getting some dlm reply it expects? > >>>>>>> dlm inter-node communication is not working here for some reason. There > >>>>>>> must be something unusual with the way the network is configured on the > >>>>>>> nodes, and/or a problem with the way the cluster code is applying the > >>>>>>> network config to the dlm. > >>>>>>> > >>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing > >>>>>>> when a network interface has multiple IP addresses, and/or routing is > >>>>>>> configured strangely. Others cc'ed could offer better details on exactly > >>>>>>> what to look for. > >>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on > >>>>>> neither, and it's entirely likely there's some obvious misconfiguration. > >>>>>> On the kvm host there are 4 virtual interfaces bridged together: > >>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was > >>>>> the second machine opened a tcp connection to port 21064 on the first > >>>>> (which had already completed the mount), and sent it a single message > >>>>> identified by wireshark as "DLM3" protocol, type recovery command: > >>>>> status command. It got back an ACK then a RST. > >>>>> > >>>>> Then the same happened in the other direction, with the first machine > >>>>> sending a similar message to port 21064 on the second, which then reset > >>>>> the connection. > >>>>> > >>> That's a symptom of the "connect from non-cluster node" error in the > >>> DLM. > >> > >> I think I am getting a message to that affect in my logs. > >> > >>> It's got a connection from an IP address that is not known to cman. > >>> So it closes it as a spoofer > >> > >> OK. Is there an easy way to see the list of ip addresses known to cman? > > > > yes, > > > > cman_tool nodes -a > > > > will show you all the nodes and their known IP addresses > > piglet2:~# cman_tool nodes -a > Node Sts Inc Joined Name > 1 M 376 2008-07-09 12:30:32 piglet1 > Addresses: 192.168.122.129 > 2 M 368 2008-07-09 12:30:31 piglet2 > Addresses: 192.168.122.130 > 3 M 380 2008-07-09 12:30:33 piglet3 > Addresses: 192.168.122.131 > 4 M 372 2008-07-09 12:30:31 piglet4 > Addresses: 192.168.122.132 > > These addresses are correct (and are the same addresses that show up in the > packet trace). > > I must be overlooking something very obvious.... > > --b. > There is something v. odd in the packet trace you sent: 16:31:25.513487 00:16:3e:2a:e6:4b (oui Unknown) > 00:16:3e:16:4d:61 (oui Unknown ), ethertype IPv4 (0x0800), length 74: 192.168.122.130.41170 > 192.168.122.129.2 1064: S 1424458172:1424458172(0) win 5840 here we have a packet from .130 (00:16:3e:2a:e6:4b) to .129 (00:16:3e:16:4d:61) but next we see: 16:31:25.513880 00:ff:1d:e9:b9:a3 (oui Unknown) > 00:16:3e:2a:e6:4b (oui Unknown ), ethertype IPv4 (0x0800), length 74: 192.168.122.129.21064 > 192.168.122.130.4 1170: S 1340956343:1340956343(0) ack 1424458173 win 5792 a packet thats supposedly from .129 except that its mac address is now 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured on two different nodes, either that or there is something odd going on with bridging. 
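[Aside, not part of the original exchange: one quick way to test this duplicate-address theory is an ARP duplicate-address probe from one of the other guests or from the host, assuming the iputils arping tool is installed there; the interface name below is illustrative only:

# probe for any other station answering for 192.168.122.129 (duplicate address detection)
arping -D -c 3 -I eth0 192.168.122.129
# see which MAC each node currently has cached for .129
ip neigh show | grep 192.168.122.129

If arping reports a reply from a MAC address other than piglet1's interface, the .129 address really is configured on, or being answered for by, a second station.]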
If that still doesn't help you solve the problem, can you do a: /sbin/ip addr list /sbin/ip route list /sbin/ip neigh list on each node and the "host" after an failed attempt so that we can try and match up the mac addresses with the interfaces in the trace? I don't think that we are too far away from a solution now, Steve. From pariviere at ippon.fr Thu Jul 10 14:02:39 2008 From: pariviere at ippon.fr (Pierre-Alain RIVIERE) Date: Thu, 10 Jul 2008 16:02:39 +0200 Subject: [Linux-cluster] Recovery disaster Message-ID: <1215698559.18002.74.camel@t61> Hello everyone, We're using Xen for about a year in my organization and I want to make profit from summer to improve our infrastructure. First step : plan a full recovery disaster procedure. It's not only related to Xen (only a few actually) so I've allowed myself to post on both Xen and Linux cluster lists. My infrastructure is built as followed : - One software SAN built with Openfiler (http://openfiler.com) : big disks, RAID 5E, redundancy on power supply, network, cpu and RAM. - N Xen Dom0 (actually 3) - The same iSCSI volume is mounted on each Dom0 and we're using CLVM on it. A PV equals a DomU disk. It works pretty well and now I would like to rebuilt my SAN as quickly as possible in case of problem (big hardware failure on the SAN). Here how all these stuffs work together : |---------Openfiler------| |-----?----Dom0---------| PV -> VG -> LV -> iSCSI -> network -> PV -> VG -> LV->Xen VDB PV : physical volume VG : volume group LV : logical volume -------------------------------------------------------------- ?- We use the LVM layer (DomO side) on top of another LVM layer (SAN side) and ?performance are good until now. Do you know some caveats about this usage? Is there's any reason for me to switch to a network aware filesystem? - Can I dd a snapshot of the iSCSI volume on the Openfiler box, send it to a tape driver and expect a dd back to a identical LV to work? - Same question if 2 or more LVs on the Openfiler box are aggregated together with CLVM (and though iSCSI) on the Dom0 side. Thanks Regards From ssingh at amnh.org Thu Jul 10 15:47:48 2008 From: ssingh at amnh.org (Sajesh Singh) Date: Thu, 10 Jul 2008 11:47:48 -0400 Subject: [Linux-cluster] Change quorum disk Message-ID: <48762F24.2000102@amnh.org> Is it possible to change the quorum disk while the cluster is active. I would like to change the device that qdiskd is using without having to cycle the cluster. Is it possible to modify the cluster.conf on each node with the new quorum disk and restart qdiskd so that the new device is used? Regards and TIA, Sajesh Singh From andrew at ntsg.umt.edu Thu Jul 10 16:54:41 2008 From: andrew at ntsg.umt.edu (Andrew A. Neuschwander) Date: Thu, 10 Jul 2008 10:54:41 -0600 (MDT) Subject: [Linux-cluster] Updated to 5.2, new gfs/locking messages Message-ID: <51672.10.8.105.69.1215708881.squirrel@secure.ntsg.umt.edu> I finally updated all my gfs cluster nodes to 5.2, when I updated the one node that serves NFS, I started getting these in /var/log/messages: gfs_controld[5079]: plock result write err 0 errno 2 kernel: lockd: grant for unknown block kernel: gfs2 lock granted after lock request failed; dangling lock! gfs_controld[5079]: plock result write err -1 errno 2 gfs_controld[5079]: plock result write err 0 errno 2 The "plock result write err" messages occur frequently. This is a centos 5.2 node serving nfs from a gfs filesystem. The nfs client that seems to generate these errors is a fedora 9 nfs3 client, but that's just a guess. 
I can't find much about these messages via google. How serious are these messages? Thanks, -A -- Andrew A. Neuschwander, RHCE Linux Systems/Software Engineer College of Forestry and Conservation The University of Montana http://www.ntsg.umt.edu andrew at ntsg.umt.edu - 406.243.6310 From bfilipek at crscold.com Thu Jul 10 17:41:59 2008 From: bfilipek at crscold.com (Brad Filipek) Date: Thu, 10 Jul 2008 12:41:59 -0500 Subject: [Linux-cluster] Basic 2 node NFS cluster setup help References: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com> Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D024779@SRVEDI.upark.crscold.com> Anybody running a 2 node NFS setup like this? Brad -----Original Message----- From: linux-cluster-bounces at redhat.com on behalf of Brad Filipek Sent: Wed 7/9/2008 8:51 AM To: linux-cluster at redhat.com Subject: [Linux-cluster] Basic 2 node NFS cluster setup help I am a little unsure on how to properly setup an NFS export on my 2 node cluster. I have 1 service in cluster manager called "cluster" and 4 resources: 1) Virtual IP of 172.25.7.10 (which binds to eth0) 2) Virtual IP of 172.25.8.10 (which binds to eth1) 3) ext3 file system mount at /SAN/LogVol2 called "data" 4) ext3 file system mount at /SAN/LogVol3 called "shared" When I start the cluster services using just these 4 resources assiged to my one service called "cluster", everything starts up and works fine. What I need to do is assign 3 NFS exports: /SAN/LogVol3/files webserver(ro,sync) /SAN/LogVol3/webup webserver(rw,sync) /SAN/LogVol2/webdown webserver(ro,sync) Do I need to create 3 new "NFS Export" resources for these? When I select the "NFS Export" option within cluster suite, I only have one field to fill in - Name. It does not let me select the path that I want to export and which options to allow such as the host, ro or rw, etc. I am just trying to make the above exports available on my cluster's virtual IP of 172.25.7.10 instead of setting it up on each of the two nodes and manually starting the NFS service on whichever node is active in the cluster. Do I still need to create an /etc/exports file with all 3 of these entries on each node? Or is there a config file somewhere else? I read the NFS cookbook but it explains how to setup NFS using multiple services (I only have one service) with active/active GFS (I am using EXT3 in active/passive). Thanks in advance for any help. Brad Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments. -------------- next part -------------- An HTML attachment was scrubbed... 
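For a single failover service carrying several exports, the usual cluster.conf shape puts an nfsexport resource under the filesystem and the allowed hosts underneath it as nfsclient children. A rough sketch only -- the names and device path here are invented, and the exact attributes should be checked against the rgmanager resource agents shipped with your release:

  <service autostart="1" name="cluster">
          <ip address="172.25.7.10" monitor_link="1"/>
          <fs name="shared" device="/dev/VGSAN/LogVol3" fstype="ext3" mountpoint="/SAN/LogVol3">
                  <nfsexport name="LogVol3-exports">
                          <nfsclient name="webserver-ro" target="webserver" options="ro,sync"/>
                          <nfsclient name="webserver-rw" target="webserver" options="rw,sync"/>
                  </nfsexport>
          </fs>
  </service>

The nfsclient agent drives exportfs itself, so the clustered exports do not also need to appear in /etc/exports; exporting individual subdirectories such as /SAN/LogVol3/webup rather than the whole mount point may need a path override or a separate export per directory, depending on the agent version.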
URL: From lhh at redhat.com Thu Jul 10 20:42:50 2008 From: lhh at redhat.com (Lon Hohberger) Date: Thu, 10 Jul 2008 16:42:50 -0400 Subject: [Linux-cluster] Basic 2 node NFS cluster setup help In-Reply-To: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com> References: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com> Message-ID: <1215722570.22185.29.camel@localhost.localdomain> On Wed, 2008-07-09 at 08:51 -0500, Brad Filipek wrote: > I am a little unsure on how to properly setup an NFS export on my 2 > node cluster. I have 1 service in cluster manager called "cluster" and > 4 resources: > > 1) Virtual IP of 172.25.7.10 (which binds to eth0) > 2) Virtual IP of 172.25.8.10 (which binds to eth1) > 3) ext3 file system mount at /SAN/LogVol2 called "data" > 4) ext3 file system mount at /SAN/LogVol3 called "shared" > > When I start the cluster services using just these 4 resources assiged > to my one service called "cluster", everything starts up and works > fine. > > What I need to do is assign 3 NFS exports: > /SAN/LogVol3/files webserver(ro,sync) > /SAN/LogVol3/webup webserver(rw,sync) > /SAN/LogVol2/webdown webserver(ro,sync) > > Do I need to create 3 new "NFS Export" resources for these? When I > select the "NFS Export" option within cluster suite, I only have one > field to fill in - Name. It does not let me select the path that I > want to export and which options to allow such as the host, ro or rw, > etc. I am just trying to make the above exports available on my > cluster's virtual IP of 172.25.7.10 instead of setting it up on each > of the two nodes and manually starting the NFS service on whichever > node is active in the cluster. Do I still need to create > an /etc/exports file with all 3 of these entries on each node? Or is > there a config file somewhere else? I read the NFS cookbook but it > explains how to setup NFS using multiple services (I only have one > service) with active/active GFS (I am using EXT3 in active/passive). Typically, you add an NFSexport (which is mostly a placeholder). Below that, you attach nfsclients - which are actual hosts. -- Lon From ajeet.singh.raina at logica.com Fri Jul 11 06:14:41 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Fri, 11 Jul 2008 11:44:41 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com> Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? 
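An unformatted partition does need a filesystem on it before a cluster File System resource can mount it. A minimal sanity check from one node, assuming the exported LUN shows up on the initiator as /dev/sdb (the initiator-side name is often not /dev/sda6 -- check /proc/partitions after the iSCSI login):

  cat /proc/partitions                    # the IET VIRTUAL-DISK should appear as a new sd device
  mkfs.ext3 /dev/sdb                      # one-time format, run from one node only
  mount -t ext3 /dev/sdb /mnt && touch /mnt/testfile && umount /mnt

That same device path is then what goes into the Device field of the File System resource, and with ext3 the cluster must only ever mount it on one node at a time.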
Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. 
netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
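For the file-backed variant, the backing file generally has to exist at its full size before ietd will export it. A rough sketch, with an arbitrary file name and size:

  mkdir -p /srv/iscsi
  dd if=/dev/zero of=/srv/iscsi/lun0.img bs=1M count=2048    # ~2 GB backing store

and in /etc/ietd.conf:

  Target iqn.2008-07.com.example:storage.lun1
          Lun 0 Path=/srv/iscsi/lun0.img,Type=fileio
          Alias iDISK0

then restart the iscsi-target service and rediscover from the initiators.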
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sayed.Mujtaba at in.unisys.com Fri Jul 11 06:55:34 2008 From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed) Date: Fri, 11 Jul 2008 12:25:34 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com> <0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com> Message-ID: Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Fri Jul 11 07:02:34 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Fri, 11 Jul 2008 12:32:34 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com> Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. 
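A quick check is to run the same few commands on both nodes and compare the results (the sd device name can differ from node to node, so compare sizes rather than names):

  iscsi-ls                                 # the session to the target should be ESTABLISHED on both nodes
  cat /proc/partitions                     # the VIRTUAL-DISK should appear as an sd device of the same size
  mount -t ext3 /dev/sdb /newshare         # hypothetical device name; test-mount on ONE node once the
  umount /newshare                         # filesystem has been created, touch a file, then unmount

If each node can mount it (one at a time, since ext3 is not a cluster filesystem), the File System resource will follow the service when rgmanager relocates it.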
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sayed.Mujtaba at in.unisys.com Fri Jul 11 09:51:07 2008 From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed) Date: Fri, 11 Jul 2008 15:21:07 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com> Message-ID: To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ..Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? 
Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... 
########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... 
########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. 
But you also need to help me with what entry I need to make in cluster.conf once these things are completed?
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
Shall I need to mention Lun 0 ? Is it needed?
Yes, of course it's needed.
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..

I want to set up iSCSI as I am running short of Shared Storage. One of the docs, http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI , says:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes (lvm). The first disk is for the OS and the second for the iSCSI storage.
[/doc]
My hard disk partitioning says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
#/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
[/code]
Since I need to make an entry like this in /etc/ietd.conf:
# iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
should I make a separate partition, or what should I mention under the Lun 0 path=??? entry?
If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
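For reference, the Filesystem resource described earlier in the quoted message (Name Sharedstorage, mount point /newshare, device /dev/sda6, ext3) would end up in cluster.conf as roughly the following. This is only an illustrative sketch of an rgmanager fs resource; the exact attributes that system-config-cluster writes can differ between releases, so treat it as a shape to compare against rather than something to paste in:

<rm>
  <resources>
    <fs name="Sharedstorage" device="/dev/sda6" mountpoint="/newshare" fstype="ext3" force_unmount="1"/>
  </resources>
</rm>

The resource is then referenced from a <service> block (for example with <fs ref="Sharedstorage"/>) so that rgmanager mounts it on whichever node currently owns the service.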
From ajeet.singh.raina at logica.com Fri Jul 11 10:13:56 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 15:43:56 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>

Hi. I have successfully set up the iSCSI target and initiator. I was able to create a partition and file system on the previously raw partition. I mounted the partition with:
# mount /dev/sda1 /newshare
(the mount point mentioned in cluster tool > Resources > Filesystem) and labelled it with: e2label /dev/sda1 DATA
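An illustrative sketch of those steps on the initiator, assuming the iSCSI disk showed up as /dev/sda and using the label and mount point from this thread; use the device name your own node actually reports, and run the destructive steps on one node only:

# create a partition on the iSCSI disk (interactive)
fdisk /dev/sda
# put an ext3 file system on the new partition
mkfs.ext3 /dev/sda1
# give it a label so it can also be mounted by label
e2label /dev/sda1 DATA
# mount it on the directory configured as the cluster mount point
mount /dev/sda1 /newshare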
But when I tried to restart iscsi on the next cluster node it showed me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
What is this error all about? Now it is showing on both the nodes.
From Sayed.Mujtaba at in.unisys.com Fri Jul 11 10:24:39 2008
From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed)
Date: Fri, 11 Jul 2008 15:54:39 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>
Message-ID:

When you mount the file system, check with the df command whether it is really mounted or not. Why don't you just stop the iscsi service on both nodes and restart it again to do a clean operation?
I mounted the partition as: #mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources > filesystem. Provided e2label /dev/sda1 DATA But When I tried to restart the iscsi on the next cluster node it showed me: Removing iscsi driver: ERROR: Module iscsi_sfnet is in use Whats this error all about? Now its showing on both the node? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:21 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ..Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. 
From ajeet.singh.raina at logica.com Fri Jul 11 10:31:11 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 16:01:11 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com>

I rebooted all the machines and this time it seems to work. But I am again stuck with something. I can see:
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
on both the machines. But whenever I create a file on one initiator it does not get created on the other. Why so?
########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. 
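For reference, a minimal matched pair of target and initiator entries. In the IQN, "storage.lun1" is just a free-form label after the iqn.yyyy-mm.reversed-domain prefix, so any descriptive name works. The sketch below assumes the target machine is 10.14.236.134, reuses the file-backed LUN path from this thread, and leaves CHAP authentication out; treat the exact values as examples only:

# /etc/ietd.conf on the target (10.14.236.134)
Target iqn.2008-10.com.logical.pe:storage.lun1
        Lun 0 Path=/home/vjs/sharess,Type=fileio
        Alias iDISK0
# the backing file can be created beforehand, e.g.: dd if=/dev/zero of=/home/vjs/sharess bs=1M count=2048

# /etc/iscsi.conf on each initiator (RHEL 4 iscsi-initiator-utils)
DiscoveryAddress=10.14.236.134
# then restart the initiator and look for the new disk:
#   service iscsi restart && cat /proc/scsi/scsi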
But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party.
If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sayed.Mujtaba at in.unisys.com Fri Jul 11 10:37:54 2008 From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed) Date: Fri, 11 Jul 2008 16:07:54 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com> Message-ID: You are getting login to same iscsi server(ip address) using iscsi commands so both are connected to same shared storage ... Just mount from one node and create some files on it ...unmount from that node and mount it from other node and see if created files from first node are visible or no ... ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 4:01 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I rebooted all the machine and this time it seems to work. But again getting stucked with something. I can see : # df -h /dev/sda1 2.8G 37M 2.6G 2% /newshare On both the machine. But Whenever I am creating any file on one initiator it don't get created on another.Why So? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:55 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. When you mount the file system check using df command if it is really mounted or no .. Why don't you just stop iscsi service on both nodes and restart it again to do clean operation.. Please search in some other forums also where you might get same information available already .(do googling with whatever error messages what you are getting) ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 3:44 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Hai..I have successfully setup iSCSI target and Initiator.I am able to : Create a partition and file system on earlier raw partition. I mounted the partition as: #mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources > filesystem. Provided e2label /dev/sda1 DATA But When I tried to restart the iscsi on the next cluster node it showed me: Removing iscsi driver: ERROR: Module iscsi_sfnet is in use Whats this error all about? Now its showing on both the node? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:21 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ..Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. 
Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. 
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. 
It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL:
From ajeet.singh.raina at logica.com Fri Jul 11 11:06:42 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Fri, 11 Jul 2008 16:36:42 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com> Brilliant ...Its Worked. I think GFS will enable us to see the files instantly on both the Cluster Nodes. Any Doc related to "Setting Up GFS"? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 4:08 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. You are getting login to same iscsi server(ip address) using iscsi commands so both are connected to same shared storage ... Just mount from one node and create some files on it ...unmount from that node and mount it from other node and see if created files from first node are visible or no ... ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 4:01 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I rebooted all the machine and this time it seems to work. But again getting stucked with something.
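For reference, on the GFS question above: GFS, unlike ext3, coordinates access through the cluster's lock manager, so files written on one node are immediately visible on the others. A minimal setup sketch for RHEL 4, assuming the cluster is named mycluster, the shared LUN is /dev/sda1 and there are two nodes (one journal each); the names are illustrative only:

# cman and fenced must already be running on both nodes
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/sda1   # run once, from one node only
mount -t gfs /dev/sda1 /newshare                         # run on every node

The lock table name (mycluster:gfs01) has to start with the cluster name from cluster.conf, and the mount can also be managed as a clusterfs resource. The Red Hat "Global File System" administration guide for RHEL 4 covers the full procedure.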
I can see : # df -h /dev/sda1 2.8G 37M 2.6G 2% /newshare On both the machine. But Whenever I am creating any file on one initiator it don't get created on another.Why So? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:55 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. When you mount the file system check using df command if it is really mounted or no .. Why don't you just stop iscsi service on both nodes and restart it again to do clean operation.. Please search in some other forums also where you might get same information available already .(do googling with whatever error messages what you are getting) ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 3:44 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Hai..I have successfully setup iSCSI target and Initiator.I am able to : Create a partition and file system on earlier raw partition. I mounted the partition as: #mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources > filesystem. Provided e2label /dev/sda1 DATA But When I tried to restart the iscsi on the next cluster node it showed me: Removing iscsi driver: ERROR: Module iscsi_sfnet is in use Whats this error all about? Now its showing on both the node? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:21 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ..Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). 
I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? 
Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? 
I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From pedroche5 at gmail.com Fri Jul 11 11:15:36 2008 From: pedroche5 at gmail.com (Pedro Gonzalez Zamora) Date: Fri, 11 Jul 2008 13:15:36 +0200 Subject: [Linux-cluster] CMAN configuration Message-ID: <47311dd20807110415h14c5eaf3o48ca67c7a3f2e44c@mail.gmail.com> Hi all, I trying to configure a cluster but I have some problems that I don't understand. Jul 11 12:34:31 dib1-s1 ccsd[6329]: Initial status:: Inquorate Jul 11 12:34:31 dib1-s1 kernel: CMAN: sending membership request Jul 11 12:34:31 dib1-s1 kernel: CMAN: Cluster membership rejected Jul 11 12:34:31 dib1-s1 ccsd[6329]: Cluster manager shutdown. Attemping to reconnect... 
Jul 11 12:34:31 dib1-s1 kernel: CMAN: Waiting to join or form a Linux-cluster Jul 11 12:34:31 dib1-s1 cman: Timed-out waiting for cluster failed Jul 11 12:34:32 dib1-s1 ccsd[6329]: Connected to cluster infrastruture via: CMAN/SM Plugin v1.1.7.1 Jul 11 12:34:32 dib1-s1 ccsd[6329]: Initial status:: Inquorate Jul 11 12:34:35 dib1-s1 kernel: CMAN: sending membership request Jul 11 12:34:35 dib1-s1 kernel: CMAN: Cluster membership rejected Jul 11 12:34:35 dib1-s1 ccsd[6329]: Cluster manager shutdown. Attemping to reconnect... Jul 11 12:35:03 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78840 seconds. Jul 11 12:35:33 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78870 seconds. Jul 11 12:36:03 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78900 seconds. Jul 11 12:36:33 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78930 seconds. Jul 11 12:37:03 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78960 seconds. Jul 11 12:37:33 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 78990 seconds. Jul 11 12:38:03 dib1-s1 ccsd[6329]: Unable to connect to cluster infrastructure after 79020 seconds Best Regards Pedro -------------- next part -------------- An HTML attachment was scrubbed... URL: From fog at t.is Fri Jul 11 14:27:02 2008 From: fog at t.is (=?iso-8859-1?Q?Finnur_=D6rn_Gu=F0mundsson_-_TM_Software?=) Date: Fri, 11 Jul 2008 14:27:02 -0000 Subject: [Linux-cluster] Monitoring services with Nagios Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is> Hi, I was planning on monitoring the status of a service from clustat (run clustat, grab the output). And as i am running a x86_64 system i can not seem to load the correct lib for snmpd to be able to read any data from it: nmpd[30150]: dlopen failed: /usr/lib64/cluster-snmp/libClusterMonitorSnmp.so: undefined symbol: _ZN17ClusterMonitoring7Cluster15runningServicesEv How do you monitor your cluster with Nagios/Other open source solutions ? (What scripts do you use etc). K?r kve?ja / Best Regards, Finnur ?rn Gu?mundsson Network Engineer - Network Operations fog at t.is TM Software Ur?arhvarf 6, IS-203 K?pavogur, Iceland Tel: +354 545 3000 - fax +354 545 3610 www.tm-software.is This e-mail message and any attachments are confidential and may be privileged. TM Software e-mail disclaimer: www.tm-software.is/disclaimer -------------- next part -------------- An HTML attachment was scrubbed... URL: From stpierre at NebrWesleyan.edu Fri Jul 11 14:55:26 2008 From: stpierre at NebrWesleyan.edu (Chris St. Pierre) Date: Fri, 11 Jul 2008 09:55:26 -0500 (CDT) Subject: [Linux-cluster] Monitoring services with Nagios In-Reply-To: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is> References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is> Message-ID: I've attached my (very basic) check_rhcs script that I use with Nagios. HTH. Chris St. Pierre Unix Systems Administrator Nebraska Wesleyan University On Fri, 11 Jul 2008, Finnur ?rn Gu?mundsson - TM Software wrote: > Hi, > > > > I was planning on monitoring the status of a service from clustat (run clustat, grab the output). 
> > And as i am running a x86_64 system i can not seem to load the correct lib for snmpd to be able to read any data from it: > > nmpd[30150]: dlopen failed: /usr/lib64/cluster-snmp/libClusterMonitorSnmp.so: undefined symbol: _ZN17ClusterMonitoring7Cluster15runningServicesEv > > > > How do you monitor your cluster with Nagios/Other open source solutions ? (What scripts do you use etc). > > > > K?r kve?ja / Best Regards, > > Finnur ?rn Gu?mundsson > Network Engineer - Network Operations > fog at t.is > > TM Software > Ur?arhvarf 6, IS-203 K?pavogur, Iceland > Tel: +354 545 3000 - fax +354 545 3610 > www.tm-software.is > > This e-mail message and any attachments are confidential and may be privileged. TM Software e-mail disclaimer: www.tm-software.is/disclaimer > > -------------- next part -------------- #! /usr/bin/perl -w # # $Id: check_rhcs 11710 2008-06-25 19:50:44Z stpierre $ # # check_rhcs # # Nagios host script to check a Redhat Cluster Suite cluster require 5.004; use strict; use lib qw(/usr/lib/nagios/plugins /usr/lib64/nagios/plugins /usr/local/nagios/libexec); use utils qw($TIMEOUT %ERRORS &print_revision &support &usage); use XML::Simple; sub cleanup($$); my $PROGNAME = "check_rhcs"; my $clustat = "/usr/sbin/clustat"; if (!-e $clustat) { cleanup("UNKNOWN", "$clustat not found"); } elsif (!-x $clustat) { cleanup("UNKNOWN", "$clustat not executable"); } # Just in case of problems, let's not hang Nagios $SIG{'ALRM'} = sub { cleanup("UNKNOWN", "clustat timed out"); }; alarm($TIMEOUT); my $output = `$clustat -x`; my $retval = $?; # Turn off alarm alarm(0); if ($output =~ /cman is not running/) { cleanup("CRITICAL", $output); } else { my $status = XMLin($output, ForceArray => ['group']); # check quorum if (!$status->{'quorum'}->{'quorate'}) { cleanup("CRITICAL", "Cluster is not quorate"); } # check nodes my %nodes = %{$status->{'nodes'}->{'node'}}; foreach my $node (keys(%nodes)) { if (!$nodes{$node}->{'state'}) { cleanup("WARNING", "Node $node is down"); } elsif (!$nodes{$node}->{'rgmanager'}) { cleanup("WARNING", "rgmanager is not running on node $node"); } } # check services my %svcs = %{$status->{'groups'}->{'group'}}; foreach my $svc (keys(%svcs)) { if ($svcs{$svc}->{'state_str'} ne 'started') { cleanup("CRITICAL", "$svc is in state " . $svcs{$svc}->{'state_str'}); } } # check return value if ($retval) { cleanup("UNKNOWN", "Cluster appeared okay, but clustat returned $retval"); } } cleanup("OK", "Cluster is sound"); ############################## # Subroutines start here # ############################## sub cleanup ($$) { my ($state, $answer) = @_; print "Cluster $state: $answer\n"; exit $ERRORS{$state}; } From lhh at redhat.com Fri Jul 11 19:26:18 2008 From: lhh at redhat.com (Lon Hohberger) Date: Fri, 11 Jul 2008 15:26:18 -0400 Subject: [Linux-cluster] Monitoring services with Nagios In-Reply-To: References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is> Message-ID: <1215804378.27354.22.camel@localhost.localdomain> On Fri, 2008-07-11 at 09:55 -0500, Chris St. Pierre wrote: > I've attached my (very basic) check_rhcs script that I use with > Nagios. HTH. 
You should use clustat -fx (f = fast / lockless) -- Lon From sghosh at redhat.com Fri Jul 11 20:35:05 2008 From: sghosh at redhat.com (Subhendu Ghosh) Date: Fri, 11 Jul 2008 16:35:05 -0400 Subject: [Linux-cluster] Monitoring services with Nagios In-Reply-To: <1215804378.27354.22.camel@localhost.localdomain> References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is> <1215804378.27354.22.camel@localhost.localdomain> Message-ID: <4877C3F9.8060108@redhat.com> Lon Hohberger wrote: > On Fri, 2008-07-11 at 09:55 -0500, Chris St. Pierre wrote: >> I've attached my (very basic) check_rhcs script that I use with >> Nagios. HTH. > > You should use clustat -fx > > (f = fast / lockless) > > -- Lon Is there any interest in submitting the script for the standard plugins (GPLv3)? Happy to help get in :) -- - regards Subhendu Ghosh From bfields at fieldses.org Fri Jul 11 22:35:39 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Fri, 11 Jul 2008 18:35:39 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <4875D5DE.7030601@redhat.com> References: <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> <4875D5DE.7030601@redhat.com> Message-ID: <20080711223539.GG23069@fieldses.org> On Thu, Jul 10, 2008 at 10:26:54AM +0100, Christine Caulfield wrote: > J. Bruce Fields wrote: >> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote: >>> J. Bruce Fields wrote: >>>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: >>>>> Steven Whitehouse wrote: >>>>>> Hi, >>>>>> >>>>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: >>>>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: >>>>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: >>>>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: >>>>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); >>>>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); >>>>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. >>>>>>>>> >>>>>>>>> >>>>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node >>>>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is >>>>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't >>>>>>>>>> getting some dlm reply it expects? >>>>>>>>> dlm inter-node communication is not working here for some reason. There >>>>>>>>> must be something unusual with the way the network is configured on the >>>>>>>>> nodes, and/or a problem with the way the cluster code is applying the >>>>>>>>> network config to the dlm. >>>>>>>>> >>>>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing >>>>>>>>> when a network interface has multiple IP addresses, and/or routing is >>>>>>>>> configured strangely. Others cc'ed could offer better details on exactly >>>>>>>>> what to look for. >>>>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on >>>>>>>> neither, and it's entirely likely there's some obvious misconfiguration. 
>>>>>>>> On the kvm host there are 4 virtual interfaces bridged together: >>>>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was >>>>>>> the second machine opened a tcp connection to port 21064 on the first >>>>>>> (which had already completed the mount), and sent it a single message >>>>>>> identified by wireshark as "DLM3" protocol, type recovery command: >>>>>>> status command. It got back an ACK then a RST. >>>>>>> >>>>>>> Then the same happened in the other direction, with the first machine >>>>>>> sending a similar message to port 21064 on the second, which then reset >>>>>>> the connection. >>>>>>> >>>>> That's a symptom of the "connect from non-cluster node" error in >>>>> the DLM. >>>> I think I am getting a message to that affect in my logs. >>>> >>>>> It's got a connection from an IP address that is not known to >>>>> cman. So it closes it as a spoofer >>>> OK. Is there an easy way to see the list of ip addresses known to cman? >>> yes, >>> >>> cman_tool nodes -a >>> >>> will show you all the nodes and their known IP addresses >> >> piglet2:~# cman_tool nodes -a >> Node Sts Inc Joined Name >> 1 M 376 2008-07-09 12:30:32 piglet1 >> Addresses: 192.168.122.129 2 M 368 2008-07-09 12:30:31 >> piglet2 >> Addresses: 192.168.122.130 3 M 380 2008-07-09 12:30:33 >> piglet3 >> Addresses: 192.168.122.131 4 M 372 2008-07-09 12:30:31 >> piglet4 >> Addresses: 192.168.122.132 >> >> These addresses are correct (and are the same addresses that show up in the >> packet trace). >> >> I must be overlooking something very obvious.... > > Hmm, very odd. > > Are those IP addresses consistent across all nodes in the cluster ? Yes, "cman_tool nodes -a" gives the same IP addresses no matter which of the four cluster nodes it's run on. --b. From bfields at fieldses.org Fri Jul 11 23:25:29 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Fri, 11 Jul 2008 19:25:29 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <1215696434.4011.161.camel@quoit> References: <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> <1215696434.4011.161.camel@quoit> Message-ID: <20080711232529.GH23069@fieldses.org> On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote: > Hi, > > On Wed, 2008-07-09 at 12:32 -0400, J. Bruce Fields wrote: > > On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote: > > > J. Bruce Fields wrote: > > >> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote: > > >>> Steven Whitehouse wrote: > > >>>> Hi, > > >>>> > > >>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote: > > >>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote: > > >>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote: > > >>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote: > > >>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info)); > > >>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info)); > > >>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing. 
> > >>>>>>> > > >>>>>>> > > >>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node > > >>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is > > >>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't > > >>>>>>>> getting some dlm reply it expects? > > >>>>>>> dlm inter-node communication is not working here for some reason. There > > >>>>>>> must be something unusual with the way the network is configured on the > > >>>>>>> nodes, and/or a problem with the way the cluster code is applying the > > >>>>>>> network config to the dlm. > > >>>>>>> > > >>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing > > >>>>>>> when a network interface has multiple IP addresses, and/or routing is > > >>>>>>> configured strangely. Others cc'ed could offer better details on exactly > > >>>>>>> what to look for. > > >>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on > > >>>>>> neither, and it's entirely likely there's some obvious misconfiguration. > > >>>>>> On the kvm host there are 4 virtual interfaces bridged together: > > >>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was > > >>>>> the second machine opened a tcp connection to port 21064 on the first > > >>>>> (which had already completed the mount), and sent it a single message > > >>>>> identified by wireshark as "DLM3" protocol, type recovery command: > > >>>>> status command. It got back an ACK then a RST. > > >>>>> > > >>>>> Then the same happened in the other direction, with the first machine > > >>>>> sending a similar message to port 21064 on the second, which then reset > > >>>>> the connection. > > >>>>> > > >>> That's a symptom of the "connect from non-cluster node" error in the > > >>> DLM. > > >> > > >> I think I am getting a message to that affect in my logs. > > >> > > >>> It's got a connection from an IP address that is not known to cman. > > >>> So it closes it as a spoofer > > >> > > >> OK. Is there an easy way to see the list of ip addresses known to cman? > > > > > > yes, > > > > > > cman_tool nodes -a > > > > > > will show you all the nodes and their known IP addresses > > > > piglet2:~# cman_tool nodes -a > > Node Sts Inc Joined Name > > 1 M 376 2008-07-09 12:30:32 piglet1 > > Addresses: 192.168.122.129 > > 2 M 368 2008-07-09 12:30:31 piglet2 > > Addresses: 192.168.122.130 > > 3 M 380 2008-07-09 12:30:33 piglet3 > > Addresses: 192.168.122.131 > > 4 M 372 2008-07-09 12:30:31 piglet4 > > Addresses: 192.168.122.132 > > > > These addresses are correct (and are the same addresses that show up in the > > packet trace). > > > > I must be overlooking something very obvious.... > > > > --b. > > > There is something v. odd in the packet trace you sent: > > 16:31:25.513487 00:16:3e:2a:e6:4b (oui Unknown) > 00:16:3e:16:4d:61 (oui > Unknown > ), ethertype IPv4 (0x0800), length 74: 192.168.122.130.41170 > > 192.168.122.129.2 > 1064: S 1424458172:1424458172(0) win 5840 140931 0,no > p,wscale 4> > > here we have a packet from .130 (00:16:3e:2a:e6:4b) to .129 > (00:16:3e:16:4d:61) but next we see: > > 16:31:25.513880 00:ff:1d:e9:b9:a3 (oui Unknown) > 00:16:3e:2a:e6:4b (oui > Unknown > ), ethertype IPv4 (0x0800), length 74: 192.168.122.129.21064 > > 192.168.122.130.4 > 1170: S 1340956343:1340956343(0) ack 1424458173 win 5792 1460,sackOK,timest > amp 140842 140931,nop,wscale 4> > > a packet thats supposedly from .129 except that its mac address is now > 0:ff:1d:e9:b9:a3. 
So it looks like the .129 address might be configured > on two different nodes, either that or there is something odd going on > with bridging. Th mystery mac address 00:ff:1d:e9:b9:a3 of both vnet0 and vnet4. vnet0 is the bridge, which has ip .1 on the host, and which is also the interface that wireshark is being run on. The other two addresses are the mac addresses of the (virtual) ethernet interfaces inside the two kvm's, with ip's .129 and .130 respectively. So .130 is sending to the expected mac address for .129, but responses from .130 are getting the mac address of vnet0/vnet4. I'm running wireshark on the host on vnet0. Just out of curiosity, I ran it on the host on vnet1 instead, and this time saw the first DLM connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll experiment some more and look at the /sbin/ip output. --b. > If that still doesn't help you solve the problem, can you > do a: > > /sbin/ip addr list > /sbin/ip route list > /sbin/ip neigh list > > on each node and the "host" after an failed attempt so that we can try > and match up the mac addresses with the interfaces in the trace? > > I don't think that we are too far away from a solution now, From bfields at fieldses.org Sat Jul 12 03:33:08 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Fri, 11 Jul 2008 23:33:08 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080711232529.GH23069@fieldses.org> References: <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> <1215696434.4011.161.camel@quoit> <20080711232529.GH23069@fieldses.org> Message-ID: <20080712033308.GA29498@fieldses.org> On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote: > On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote: > > a packet thats supposedly from .129 except that its mac address is now > > 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured > > on two different nodes, either that or there is something odd going on > > with bridging. > > Th mystery mac address 00:ff:1d:e9:b9:a3 of both vnet0 and vnet4. vnet0 > is the bridge, which has ip .1 on the host, and which is also the > interface that wireshark is being run on. The other two addresses are > the mac addresses of the (virtual) ethernet interfaces inside the two > kvm's, with ip's .129 and .130 respectively. So .130 is sending to the > expected mac address for .129, but responses from .130 are getting the > mac address of vnet0/vnet4. > > I'm running wireshark on the host on vnet0. Just out of curiosity, I > ran it on the host on vnet1 instead, and this time saw the first DLM > connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll > experiment some more and look at the /sbin/ip output. Bah, yes, I clearly got the network configuration completely screwed up at some point--it must be trying to do some kind of NAT, though that clearly makes no sense. I'll get this untangled and then I think it should be OK.... --b. From theophanis_kontogiannis at yahoo.gr Sun Jul 13 00:14:39 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Sun, 13 Jul 2008 03:14:39 +0300 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com> Message-ID: <00a801c8e47d$72d6e4f0$5884aed0$@gr> Hello, Yes for instant access from all nodes to the file, you need a cluster aware file system like GFS (or GFS2 - still in experimental stage). You can try the following links: http://www.redhat.com/docs/manuals/csgfs/ (under GFS section) http://gfs.wikidev.net/Main_Page BR Theophanis Kontogiannis From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 2:07 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Brilliant .Its Worked. I think GFS will enable us to see the files instantly on both the Cluster Nodes. Any Doc related to "Setting Up GFS"? Pls Help _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 4:08 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. You are getting login to same iscsi server(ip address) using iscsi commands so both are connected to same shared storage . Just mount from one node and create some files on it .unmount from that node and mount it from other node and see if created files from first node are visible or no . _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 4:01 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I rebooted all the machine and this time it seems to work. But again getting stucked with something. I can see : # df -h /dev/sda1 2.8G 37M 2.6G 2% /newshare On both the machine. But Whenever I am creating any file on one initiator it don't get created on another.Why So? _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:55 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. When you mount the file system check using df command if it is really mounted or no .. Why don't you just stop iscsi service on both nodes and restart it again to do clean operation.. Please search in some other forums also where you might get same information available already .(do googling with whatever error messages what you are getting) _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 3:44 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Hai..I have successfully setup iSCSI target and Initiator.I am able to : Create a partition and file system on earlier raw partition. I mounted the partition as: #mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources > filesystem. Provided e2label /dev/sda1 DATA But When I tried to restart the iscsi on the next cluster node it showed me: Removing iscsi driver: ERROR: Module iscsi_sfnet is in use Whats this error all about? Now its showing on both the node? 
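For reference, the mount-and-check test suggested above can be run as a short shell sequence. This is only a sketch, assuming the shared iSCSI LUN shows up as /dev/sda1 on both initiators and is mounted on /newshare as in this thread; the marker file name is arbitrary:

[code]
# On node A: mount the shared LUN, drop a marker file, then unmount cleanly
mount /dev/sda1 /newshare
touch /newshare/written-by-node-a
umount /newshare

# On node B: mount the same LUN and check that the marker file is there
mount /dev/sda1 /newshare
ls -l /newshare/written-by-node-a
umount /newshare
[/code]

With a plain ext3 filesystem this only works one node at a time; concurrent mounts need a cluster filesystem such as GFS, which is where the thread goes later. The "Module iscsi_sfnet is in use" error above is also what one would expect if the filesystem was still mounted (or the device otherwise held open) when the iscsi service tried to unload the driver, so unmounting first should let the restart go through.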
_____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:21 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ...Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. 
Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls **************************************************************************** *** SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007) **************************************************************************** *** TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 **************************************************************************** *** [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. 
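For reference, the target-side and initiator-side entries being worked out in this thread pair up roughly as below. This is only a sketch for IET on the target at 10.14.236.134 exporting the raw partition /dev/sda6, with the RHEL 4 iscsi-initiator-utils initiator on the cluster nodes; the IQN is a placeholder, and the text after the colon (like "storage.lun1") is just a free-form label chosen by the administrator:

[code]
# /etc/ietd.conf on the target (10.14.236.134)
Target iqn.2008-07.com.example:storage.lun1
        # no IncomingUser/OutgoingUser lines means no CHAP, open to any initiator
        Lun 0 Path=/dev/sda6,Type=fileio
        Alias iDISK0

# /etc/iscsi.conf on each cluster node
DiscoveryAddress=10.14.236.134
[/code]

After restarting iscsi-target on the target and the iscsi service on each node, the exported LUN should show up in /proc/scsi/scsi as the IET VIRTUAL-DISK seen above.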
Go through their mailing list to find the solution. http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.r pm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. _____ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc] My Hard Disk Partition says: [code] [root at vjs ~]# fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9729 78043770 8e Linux LVM [/code] [code] # This file is edited by fstab-sync - see 'man fstab-sync' for details /dev/VolGroup00/LogVol00 / ext3 defaults 1 1 LABEL=/boot /boot ext3 defaults 1 2 /dev/VolGroup00/LogVol02 /data ext3 defaults 1 2 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 #/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0 /dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0 /dev/VolGroup00/LogVol01 swap swap defaults 0 0 [/code] Since I need to make entry on: iscsi target configuration Target iqn.2000-12.com.digicola:storage.lun1 IncomingUser gfs secretsecret OutgoingUser Lun 0 Path=/dev/sdb,Type=fileio Alias iDISK0 #MaxConnections 6 In /etc/ietd.conf Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry? If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file] Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From dirk.schulz at kinzesberg.de Sun Jul 13 17:23:50 2008 From: dirk.schulz at kinzesberg.de (Dirk H. Schulz) Date: Sun, 13 Jul 2008 19:23:50 +0200 Subject: [Linux-cluster] cluster service not running any more Message-ID: <421FFAB7307706E7651DAD7C@file.wkd-druck.org> Hi folks, I have setup a cluster on 5.2 with system-config-cluster. It is quite simple: the only service is an ip ressource that is switched. The cluster has started up fine the first time, the virtual ip was where ist belonged. Since then I have not changed anything, I simply had to restart the machines for other reasons. Now nothing works as it should: - shutting down clurgmgrd normally (service rgmanager stop) is impossible; even kill -9 does not work. I have to call "reboot" twice to force a reboot to stop clurgmgrd. 
- after reboot I can manually start the cluster again (did not venture to do it with system startup), the daemons start, nothing unusual is logged, but a) the service containing the ip ressource is not started b) clustat on the primary node moans a "timed out trying to connect to Ressource Group Manager" c) clustat on both nodes shows the node state, but does not list the service I have tried everything to get the environement clean (shutdown the firewall, set selinux to permissive, etc.), but the result is always the same. Since I did not change anything after the first successfull start of the cluster, I wonder - if there is some run time data/temporary files the ressource group manager writes to disk and tries to reread after reboot (remember, I had to kill it by violent force to be able to reboot my machines) - if it is possible at all to successfully run a cluster with cman and clurgmgrd. In case it helps here is my cluster.conf: The logs show the nodes successfully joining the cluster and such stuff and as last clurgmgrd starting, then nothing more from cluster daemons. Any hint or help is appreciated. I am stuck and do not know where to look at. Dirk From bfields at fieldses.org Sun Jul 13 20:20:16 2008 From: bfields at fieldses.org (J. Bruce Fields) Date: Sun, 13 Jul 2008 16:20:16 -0400 Subject: [Linux-cluster] gfs2, kvm setup In-Reply-To: <20080712033308.GA29498@fieldses.org> References: <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org> <1215696434.4011.161.camel@quoit> <20080711232529.GH23069@fieldses.org> <20080712033308.GA29498@fieldses.org> Message-ID: <20080713202016.GA2810@fieldses.org> On Fri, Jul 11, 2008 at 11:33:08PM -0400, bfields wrote: > On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote: > > On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote: > > > a packet thats supposedly from .129 except that its mac address is now > > > 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured > > > on two different nodes, either that or there is something odd going on > > > with bridging. > > > > Th mystery mac address 00:ff:1d:e9:b9:a3 of both vnet0 and vnet4. vnet0 > > is the bridge, which has ip .1 on the host, and which is also the > > interface that wireshark is being run on. The other two addresses are > > the mac addresses of the (virtual) ethernet interfaces inside the two > > kvm's, with ip's .129 and .130 respectively. So .130 is sending to the > > expected mac address for .129, but responses from .130 are getting the > > mac address of vnet0/vnet4. > > > > I'm running wireshark on the host on vnet0. Just out of curiosity, I > > ran it on the host on vnet1 instead, and this time saw the first DLM > > connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll > > experiment some more and look at the /sbin/ip output. > > Bah, yes, I clearly got the network configuration completely screwed up > at some point--it must be trying to do some kind of NAT, though that > clearly makes no sense. I'll get this untangled and then I think it > should be OK.... Problem found. So the network configuration that libvirt sets up has 4 interfaces (one for each of the 4 kvm guests) all bridged together on the host, with NAT setup to give the guests access to the outside world. 
That looks like this: root at pig:~# iptables -t nat -L -n Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- 192.168.122.0/24 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination OK, fine, except that packets exchanged between the hosts on the bridge also seem to be going through that POSTROUTING chain, so tcp connectsions between the guests work--sort of--but they're all getting NAT'd so they appear to come from 192.168.122.1, so the dlm complains about connection from a non cluster host". So my gfs2 mount finally succeeds after: root at pig:~# iptables -t nat -I POSTROUTING -s 192.168.122.0/24 -d 192.168.122.0/24 -j ACCEPT ptables -t nat -L -n Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination ACCEPT all -- 192.168.122.0/24 192.168.122.0/24 MASQUERADE all -- 192.168.122.0/24 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination I don't know if that's the right fix. In any case, the original behavior certainly looks to me like a bug in libvirt. Thanks for your patience! I should have caught that much sooner.... --b. From ajeet.singh.raina at logica.com Mon Jul 14 04:31:45 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 10:01:45 +0530 Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. In-Reply-To: <00a801c8e47d$72d6e4f0$5884aed0$@gr> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1795D@in-ex004.groupinfra.com> I have gone through these but no docs says anything for installing GFS on iSCSi based Storage Setup. Since I have no shared Storage but rather have iSCSI Kindda Configuration. I request You to just provide me some hint/doc which will be helpful for quick setup for testing purpose. Ajeet ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Theophanis Kontogiannis Sent: Sunday, July 13, 2008 5:45 AM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Hello, Yes for instant access from all nodes to the file, you need a cluster aware file system like GFS (or GFS2 - still in experimental stage). You can try the following links: http://www.redhat.com/docs/manuals/csgfs/ (under GFS section) http://gfs.wikidev.net/Main_Page BR Theophanis Kontogiannis From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 2:07 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Brilliant ...Its Worked. I think GFS will enable us to see the files instantly on both the Cluster Nodes. Any Doc related to "Setting Up GFS"? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 4:08 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. You are getting login to same iscsi server(ip address) using iscsi commands so both are connected to same shared storage ... Just mount from one node and create some files on it ...unmount from that node and mount it from other node and see if created files from first node are visible or no ... 
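For reference, once both nodes can log in to the same LUN, the GFS route suggested above looks roughly like the following on RHEL 4 Cluster Suite. This is only a sketch, assuming the cluster is already quorate with fencing configured, that the cluster name in cluster.conf is "mycluster" (a placeholder) and that the shared device appears as /dev/sda1 on the nodes:

[code]
# Make a GFS filesystem with DLM locking; the lock table is <clustername>:<fsname>
# and -j should be at least the number of nodes that will mount it
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/sda1

# With cman and fenced running, the filesystem can then be mounted on every node at once
mount -t gfs /dev/sda1 /newshare
[/code]

Unlike the ext3 setup earlier in the thread, a file created under /newshare on one node should then be visible immediately on the other.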
________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 4:01 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I rebooted all the machine and this time it seems to work. But again getting stucked with something. I can see : # df -h /dev/sda1 2.8G 37M 2.6G 2% /newshare On both the machine. But Whenever I am creating any file on one initiator it don't get created on another.Why So? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:55 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. When you mount the file system check using df command if it is really mounted or no .. Why don't you just stop iscsi service on both nodes and restart it again to do clean operation.. Please search in some other forums also where you might get same information available already .(do googling with whatever error messages what you are getting) ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 3:44 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Hai..I have successfully setup iSCSI target and Initiator.I am able to : Create a partition and file system on earlier raw partition. I mounted the partition as: #mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources > filesystem. Provided e2label /dev/sda1 DATA But When I tried to restart the iscsi on the next cluster node it showed me: Removing iscsi driver: ERROR: Module iscsi_sfnet is in use Whats this error all about? Now its showing on both the node? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 3:21 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. 
To dicover this volume from both nodes, hopefully you are aware of these iscsi commands Just giving examples 1) First discover if these volumes are visible 1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222 (where 10.1.40.222 is IP address of iscsi ) 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov 10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware 10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware You can see it is showing prov, prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created] 2)Login to iscsi iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 .login 3)do cat /proc/partitions It should show you /sd ** 4)mount that /dev/sd* to any of cluster [it should allow you to mount from both nodes Just read some iscsi manuals and do this [withought GUI you can do that ...Add new resource basically related to clustering resource which automatically Mount your shared device when cluster manager is started ) So better configure it using iscsi commands and see whether you can mount it from both nodes [then you can add a resource about it] ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 12:33 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Ya,I have now created /newshare directory on the both scsi initiator machine(cluster nodes). I made the following entry thru system-config-cluster: Resource >> Add New Resource >> Filesystem Name : Sharedstorage Mount Point : /newshare Device : /dev/sda6 Option : Filesystem type : ext3 Saved the file and sent to the other Cluster Nodes. Now What Next? How will I know if the Shared Storage is seen through both the Cluster Nodes? Earlier I had a script called duoscript on both the Cluster Nodes.What I had tested: I ran the script on both the cluster nodes.I stopped few processes on one of node,suddenly other took the responsibility. Now where should I put the script on shared Storage(target)? Pls Help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed Mohammed Sent: Friday, July 11, 2008 12:26 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Re:When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? Create one directory as mount point , Select any file system which you want to create in list ,you can choose default file system ID there .. GUI will do the rest .. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Friday, July 11, 2008 11:45 AM To: linux-cluster at redhat.com Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Anyway, I am successful in setting Up iSCSI iniatiator and Target. What I did is Created a raw partition(unformatted ) on target machine and restarted both the machine. I put : Lun 0 path=/dev/sda6 And That Did job for me. Now I can easily see: [root at BL01DL385 ~]# cat /proc/scsi/scsi Attached devices: Host: scsi1 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 The "Virtual DISk" Entry confirms that. 
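For reference, once the IET VIRTUAL-DISK shows up in /proc/scsi/scsi like this, the matching block device on the initiator can be identified before any cluster configuration is done. This is only a sketch; /dev/sdb is just an example of the name the new disk might get on a given node:

[code]
# List the block devices the kernel knows about; the new, so far unpartitioned
# entry is the LUN exported by the iSCSI target
cat /proc/partitions

# Confirm its size matches what the target exports and note the device name
fdisk -l /dev/sdb
[/code]

It is that device on the initiator, not the target's local /dev/sda6, that gets partitioned, formatted and pointed to by the filesystem resource in system-config-cluster.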
Now I am making entry in #system-config-cluster and Want to know what exact entry I need to make here: When I click on Resource >> File System on Cluster Tool...It asked for Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry I need to make ? My machine address is 10.14.236.134. Path where Unformatted Partition made is /dev/sda6 As for Now, I have only unformatted partition?Do I need to format it? Pls Help From: Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:33 PM To: 'linux clustering' Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. [root at BL02DL385 ~]# iscsi-ls ************************************************************************ ******* SFNet iSCSI Driver Version ....4:0.1.11-6(03-Aug-2007) ************************************************************************ ******* TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1 TARGET ALIAS : HOST ID : 0 BUS ID : 0 TARGET ID : 0 TARGET ADDRESS : 10.14.236.134:3260,1 SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008 SESSION ID : ISID 00023d000001 TSIH 100 ************************************************************************ ******* [root at BL02DL385 ~]# chkconfig iscsi on [root at BL02DL385 ~]# I guess it worked.Finally ISCSI Setup Done. What is the next Step? Pls help ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:28 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I followed as said in the doc and found it this way: [root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4 Preparing... ########################################### [100%] 1:iscsi-initiator-utils ########################################### [100%] [root at BL02DL385 ~]# vi /etc/iscsi.conf DiscoveryAddress=10.14.236.134 # OutgoingUsername=fred # OutgoingPassword=uhyt6h # and/or # DiscoveryAddress=10.14.236.134 # IncomingUsername=mary # IncomingPassword=kdhjkd9l # [root at BL02DL385 ~]# service iscsi start Checking iscsi config: [ OK ] Loading iscsi driver: [ OK ] Starting iscsid: [ OK ] [root at BL02DL385 ~]# CD /proc/scsi/scsi -bash: CD: command not found [root at BL02DL385 ~]# vi /proc/scsi/scsi It is Displaying so: Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun: 00 Vendor: IET Model: VIRTUAL-DISK Rev: 0 Type: Direct-Access ANSI SCSI revision: 04 ~ ~ Is it working fine? I will do run the same command sequence in the other Cluster Node. Is it fine upto this point? What Next? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 4:13 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Great !!! I ran depmod and it ran well now. Thanks for the link anyway. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 3:39 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. This is related to IET. Go through their mailing list to find the solution. 
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 3:30 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. I am Facing this Issue: [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] Logs: /var/log/messages Jul 10 15:25:24 vjs ietd: nl_open -1 Jul 10 15:25:24 vjs ietd: netlink fd Jul 10 15:25:24 vjs ietd: : Connection refused Jul 10 15:25:24 vjs iscsi-target: ietd startup failed Any idea? I just did the following steps: [root at vjs ~]# mkdir cluster_share [root at vjs ~]# cd cluster_share/ [root at vjs cluster_share]# touch shared [root at vjs cluster_share]# cd [root at vjs ~]# mkdir /usr/src/iscsitarget [root at vjs ~]# cd /usr/src/ debug/ iscsitarget/ kernels/ redhat/ [root at vjs ~]# cd /usr/src/iscsitarget/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/ noarch/ x86_64/ [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget- iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm iscsitarget-debuginfo-0.4.12-6.x86_64.rpm [root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_ 64.rpm Preparing... ########################################### [100%] 1:iscsitarget-kernel ########################################### [ 50%] 2:iscsitarget ########################################### [100%] [root at vjs iscsitarget]# chkconfig --add iscsi-target [root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on [root at vjs iscsitarget]# vi /etc/ietd.conf Target iqn.2008-07.com.logica.vjs:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/root/cluster_share,Type=fileio Alias iDISK0 I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt?? [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# hostname vjs [root at vjs iscsitarget]# vi /etc/hosts [root at vjs iscsitarget]# ping Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline] [-p pattern] [-s packetsize] [-t ttl] [-I interface or address] [-M mtu discovery hint] [-S sndbuf] [ -T timestamp option ] [ -Q tos ] [hop1 ...] destination [root at vjs iscsitarget]# vjs bash: vjs: command not found [root at vjs iscsitarget]# ping vjs PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms --- vjs.logica.com ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2 [root at vjs iscsitarget]# ping vjs.logica.com PING vjs.logica.com (10.14.236.134) 56(84) bytes of data. 
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms 64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms --- vjs.logica.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 999ms rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2 [root at vjs iscsitarget]# vi /etc/ietd.conf [root at vjs iscsitarget]# service iscsi-target restart Stoping iSCSI target service: [FAILED] Starting iSCSI target service: FATAL: Module iscsi_trgt not found. netlink fd : Connection refused [FAILED] [root at vjs iscsitarget]# [root at vjs iscsitarget]# ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:57 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. So I have the following Entry at my ietd.conf file: # iscsi target configuration Target iqn.2008-10.com.logical.pe:storage.lun1 IncomingUser OutgoingUser Lun 0 Path=/home/vjs/sharess,Type=fileio Alias iDISK0 #MaxConnections 6 Is above Entry Correct? My machine Hostname is pe.logical.com. Little confused about storage.lun1 whats that? I have now not included any incoming or outgoing user?Its open for all. What About Alias Entry? Ok After this entry being made, I have confusion on client side too. The Doc says You need to make Entry on /etc/iscsi.conf file as: # simple iscsi.conf DiscoveryAddress=172.30.0.28 OutgoingUserName=gfs OutgoingPassword=secretsecret LoginTimeout=15 DiscoveryAddress=172.30.0.28 What's the above entry means?IP?? As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes. Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:48 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:42 PM To: linux clustering Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. Shall I need to mention Lun 0 ? is it needed? Yes, of course it's needed ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash Sent: Thursday, July 10, 2008 2:38 PM To: linux clustering Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage.. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Thursday, July 10, 2008 2:22 PM To: linux-cluster at redhat.com Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage.. I want to setup iSCSI as I am running short of Shared Storage. In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that : [doc] Install the Target 1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). 
The first disk is for the OS and the second for the iSCSI storage [/doc]

My Hard Disk Partition says:

[code]
[root at vjs ~]# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        9729    78043770   8e  Linux LVM
[/code]

[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 /             ext3    defaults        1 1
LABEL=/boot              /boot         ext3    defaults        1 2
/dev/VolGroup00/LogVol02 /data         ext3    defaults        1 2
none                     /dev/pts      devpts  gid=5,mode=620  0 0
none                     /dev/shm      tmpfs   defaults        0 0
none                     /proc         proc    defaults        0 0
none                     /sys          sysfs   defaults        0 0
#/dev/dvd                /mnt/dvd      auto    defaults,exec,noauto,enaged 0 0
/dev/hda                 /media/cdrom  pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap          swap    defaults        0 0
[/code]

Since I need to make entry on:

iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
        IncomingUser gfs secretsecret
        OutgoingUser
        Lun 0 Path=/dev/sdb,Type=fileio
        Alias iDISK0
        #MaxConnections 6

In /etc/ietd.conf

Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry?

If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file]

Pls Help

This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
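Both of the approaches mentioned above end up as the Path= value of a Lun line in /etc/ietd.conf. A rough sketch of the two variants, assuming there is free space in VolGroup00; the logical volume name, file path, size and IQN below are only examples:

[code]
# Option 1: back the LUN with a plain file on the target machine
dd if=/dev/zero of=/home/test/target_file bs=1M count=2048
# /etc/ietd.conf:
#   Target iqn.2000-12.com.digicola:storage.lun1
#           Lun 0 Path=/home/test/target_file,Type=fileio

# Option 2: back the LUN with a dedicated logical volume
lvcreate -L 10G -n iscsi_lun0 VolGroup00
# /etc/ietd.conf:
#   Target iqn.2000-12.com.digicola:storage.lun1
#           Lun 0 Path=/dev/VolGroup00/iscsi_lun0,Type=fileio

# restart the target so the new LUN is exported
service iscsi-target restart
[/code]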
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fdinitto at redhat.com  Mon Jul 14 05:38:56 2008
From: fdinitto at redhat.com (Fabio M. Di Nitto)
Date: Mon, 14 Jul 2008 07:38:56 +0200 (CEST)
Subject: [Linux-cluster] Cluster 2.03.05 released
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

The cluster team and its vibrant community are proud to announce the 6th release from the STABLE2 branch: 2.03.05.

The STABLE2 branch collects, on a daily base, all bug fixes and the bare minimal changes required to run the cluster on top of the most recent Linux kernel (2.6.25) and rock solid openais (0.80.3 or higher).
The new source tarball can be downloaded here: ftp://sources.redhat.com/pub/cluster/releases/cluster-2.03.05.tar.gz In order to use GFS1, the Linux kernel requires a minimal patch: ftp://sources.redhat.com/pub/cluster/releases/lockproto-exports.patch To report bugs or issues: https://bugzilla.redhat.com/ Would you like to meet the cluster team or members of its community? Join us on IRC (irc.freenode.net #linux-cluster) and share your experience with other sysadministrators or power users. Happy clustering, Fabio Under the hood (from 2.03.04): Benjamin Marzinski (3): gnbd-kernel: Fix receiver race [gnbd-kernel] bz 449812: disallow sending requests after a send has failed. [gnbd-kernel] bz 442606: Switch gnbd to use deadline scheduler by default. Bob Peterson (12): Added an optional block-size to mkfs.gfs2 Fix build warnings in gfs2-utils. Fix another compiler warning for 32-bit arch. Fix build warnings from libgfs Fix gfs_debug build warning Ignoring gets return value in gfs_mkfs Fix gfs_tool build warnings Fix gfs_fsck build warnings Fix 32-bit warning in super.c. 452004: gfs: BUG: unable to handle kernel paging request. savemeta was not saving gfs1 journals properly. gfs2_fsck fails: Unable to read in jindex inode. Christine Caulfield (2): [CMAN] Fix some compiler warnings on 64 bit systems [CMAN] Only do timestamp check for older nodes. Fabio M. Di Nitto (18): [QDISK] Add better support for Xen virtual block devices [CCS] Fix build warnings on sparc [QDISK] Fix debug type [QDISK] get_config_data cleanup [QDISK] Remove duplicate debugging configuration [MISC] Fix build errors with Fedora default build options [MISC] Fix previous cherry pick build failure in stable branch [QDISK] Major clean up [GFS2] hexedit does not need syslog [CCS] Remove duplicate header [BUILD] Allow configuration of docdir [BUILD] Fix docdir default path [MISC] Documentation cleanup [BUILD] Fix install of telnet_ssl [BUILD] Fix telnet_ssl build [BUILD] Add make oldconfig target [BUILD] Add fence_lpar fencing agent to the build system [BUILD] Clean extra kernel modules files James Parsons (1): Fix for 251358 Lon Hohberger (5): Fix #362351 - make fence_xvmd work in no-cluster mode Ancillary NOCLUSTER mode fixes for fence_xvmd Ancillary NOCLUSTER mode fixes for fence_xvmd [rgmanager] Make rgmanager check pbond links correctly [rgmanager] Fix erroneous broadcast matching in ip.sh Marek 'marx' Grac (2): [FENCE] Bug #448822: fence_ilo doesn't work with iLO [FENCE]: Fix #237266: New fence agent for HMC/LPAR .gitignore | 1 + COPYING.applications | 339 ---------------------- COPYING.libraries | 510 --------------------------------- COPYRIGHT | 242 ---------------- Makefile | 11 +- README.licence | 33 --- ccs/daemon/cnx_mgr.c | 8 + ccs/daemon/misc.c | 1 - cman/daemon/ais.c | 4 +- cman/daemon/commands.c | 6 +- cman/daemon/daemon.c | 4 +- cman/qdisk/crc32.c | 8 - cman/qdisk/daemon_init.c | 16 +- cman/qdisk/disk.h | 1 - cman/qdisk/disk_util.c | 69 +----- cman/qdisk/main.c | 88 ++---- cman/qdisk/proc.c | 8 +- cman/qdisk/scandisk.c | 32 ++- cman/qdisk/score.c | 56 +---- cman/qdisk/score.h | 5 - configure | 15 + doc/COPYING.applications | 339 ++++++++++++++++++++++ doc/COPYING.libraries | 510 +++++++++++++++++++++++++++++++++ doc/COPYRIGHT | 242 ++++++++++++++++ doc/Makefile | 17 ++ doc/README.licence | 33 +++ fence/agents/egenera/fence_egenera.pl | 22 ++- fence/agents/ilo/fence_ilo.py | 99 ++++--- fence/agents/lib/Makefile | 2 +- fence/agents/lib/fencing.py.py | 18 ++- fence/agents/lib/telnet_ssl.py | 72 +++++ 
fence/agents/lpar/Makefile | 18 ++ fence/agents/lpar/fence_lpar.py | 97 +++++++ fence/agents/xvm/fence_xvm.c | 4 +- fence/agents/xvm/fence_xvmd.c | 43 +++- fence/agents/xvm/options.c | 1 - fence/agents/xvm/xml.c | 4 +- fence/man/fence_xvmd.8 | 7 + gfs-kernel/src/gfs/bits.c | 2 +- gfs/gfs_debug/readfile.c | 4 +- gfs/gfs_fsck/fs_bits.c | 13 +- gfs/gfs_fsck/fs_dir.c | 4 +- gfs/gfs_fsck/fs_inode.c | 2 +- gfs/gfs_fsck/log.c | 8 +- gfs/gfs_fsck/main.c | 18 +- gfs/gfs_fsck/pass2.c | 4 +- gfs/gfs_fsck/pass5.c | 4 +- gfs/gfs_fsck/rgrp.c | 4 +- gfs/gfs_fsck/super.c | 19 +- gfs/gfs_fsck/util.c | 6 +- gfs/gfs_mkfs/main.c | 4 +- gfs/gfs_tool/counters.c | 2 +- gfs/gfs_tool/main.c | 2 +- gfs/gfs_tool/misc.c | 6 +- gfs/gfs_tool/sb.c | 11 +- gfs/libgfs/file.c | 2 +- gfs/libgfs/fs_bits.c | 6 +- gfs/libgfs/fs_dir.c | 6 +- gfs/libgfs/fs_inode.c | 2 +- gfs/libgfs/log.c | 8 +- gfs/libgfs/rgrp.c | 8 +- gfs/libgfs/util.c | 6 +- gfs2/edit/hexedit.c | 6 +- gfs2/edit/savemeta.c | 13 + gfs2/fsck/lost_n_found.c | 26 ++- gfs2/libgfs2/super.c | 1 + gfs2/man/mkfs.gfs2.8 | 11 +- gfs2/mkfs/main_mkfs.c | 29 ++- gfs2/quota/main.c | 19 +- gfs2/tool/df.c | 9 +- gnbd-kernel/src/gnbd.c | 62 ++++- gnbd-kernel/src/gnbd.h | 3 + make/clean.mk | 3 +- make/defines.mk.input | 1 + rgmanager/src/clulib/cman.c | 6 +- rgmanager/src/clulib/daemon_init.c | 14 +- rgmanager/src/clulib/msg_cluster.c | 26 ++- rgmanager/src/clulib/msgtest.c | 3 +- rgmanager/src/daemons/clurmtabd_lib.c | 2 +- rgmanager/src/daemons/main.c | 3 +- rgmanager/src/resources/ip.sh | 13 +- 81 files changed, 1872 insertions(+), 1514 deletions(-) -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.2 (GNU/Linux) iQIVAwUBSHrmdggUGcMLQ3qJAQLcww/9Esm6ygIuGGZ4ycMcKtcob6qmI3dcY1K3 YfaKm5g0iDF9bNQVwiZPyMLiFUdre9wxhx7Eh7rWqI/a728osxTInXktiOlo6kcR NEkA3AyX2A2MbmJf59aTTSDzI0EJ+I2IkNv54pyXwoZVmHNBnR2a6/J/afYk16K5 hq5/SNxBSf9bGEjfo+1D7ntOwQZ8eCcIgw8FnY3kkdcM4ZkkcKKXQO8X8q4tlgXr Euq4GUh8WjkkTKtPxxLlyMfqc9Jo/G2UwESgT0XGyEHm45Ao7ye4opVmLu8516rw lOJje35+MkGfuCQROGZn9C4ZxGNVQf3CaiXzwYLBQKbyPiR31BaKEVOmwPiX84f5 TgOrdJWPxPHudaCUpgkEdORKl5iM8XHR+wokBegNmttF38ouA7R9ndtgv4lMbqbI vh9GKnVfmeBjtU2TAKlvHaLsrM+EBOkG6O8Jp010cb77hVxpf3TxMi8hrN1QGdlo 1ImDzRkTvWNTaGp++MGc0mm6VGaZPsc5VCvI0KERphF8CduP4y5Qtq2fp4wWZMbR exMqvraz1odGTRNjfit5+fEV4pV7FOYwwAjlGt7GU86qVaZHsLrJlXQ1R47lE0k1 Uuvia7lL83Prr/e7zF+AOT/Y3UVvMht+c5JP1lTV8AjIsX52BEvqyaVmQNLNA6NI Ps1i6yrqEn8= =ba+3 -----END PGP SIGNATURE----- From ajeet.singh.raina at logica.com Mon Jul 14 07:07:58 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 12:37:58 +0530 Subject: [Linux-cluster] GFS Installation on iSCSI.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17960@in-ex004.groupinfra.com> Hello Guys, My Machine information is: [root at BL02DL385 ~]# uname -arn Linux BL02DL385 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 2005 x86_64 x86_64 x86_64 GNU/Linux I have downloaded the GFS Package : [root at BL02DL385 ~]# rpm -qa GFS GFS-6.1.15-3 But I am getting other Packages matching my architecture.All I was searching src package which I can rebuild myself. But Wonder whats the steps for that? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
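On the question of rebuilding from source: the userspace GFS tools are already installed above, and what is usually missing is the kernel module package built against the running kernel. A sketch of the usual source-RPM rebuild, assuming the matching kernel headers are installed first; the package and file names are only examples and must match whatever source package is actually downloaded:

[code]
# which kernel the module has to be built for
uname -r

# the matching kernel headers must be present (package name is an example)
up2date --install kernel-smp-devel

# rebuild the downloaded source RPM; the binary RPMs it produces
# end up under /usr/src/redhat/RPMS/<arch>/
rpmbuild --rebuild GFS-kernel-2.6.9-60.3.src.rpm
rpm -ivh /usr/src/redhat/RPMS/x86_64/GFS-kernel-smp-*.rpm
[/code]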
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Mon Jul 14 09:12:34 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 14:42:34 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com> I have an old Cluster RPM installed on my machine.Now I have got cluster-2.03.04 Package. How Can I install it? When I tried untarring the package and installing,it threw the following error: [root at loy cluster-2.03.04]# ./configure Configuring Makefiles for your system... Checking tree: nothing to do Checking kernel: Unable to find (/usr/src/linux/Makefile)! Make sure that: - the above path is correct - your kernel is properly configured and prepared. - kernel_build and kernel_src options to configure are set properly. [root at loy cluster-2.03.04]# cd I am also not getting any Doc for that. Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsrlinux at gmail.com Mon Jul 14 09:27:25 2008 From: gsrlinux at gmail.com (GS R) Date: Mon, 14 Jul 2008 14:57:25 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com> Message-ID: <487B1BFD.9030707@gmail.com> Singh Raina, Ajeet wrote: > > I have an old Cluster RPM installed on my machine.Now I have got > cluster-2.03.04 Package. > > How Can I install it? > > When I tried untarring the package and installing,it threw the > following error: > > [root at loy cluster-2.03.04]# ./configure > > Configuring Makefiles for your system... > > Checking tree: nothing to do > > Checking kernel: > > Unable to find (/usr/src/linux/Makefile)! > Hi Ajeet, Check if you have the kernel development packages installed? If yes then do a [root at gsr1 ~]# cd /usr/src/ [root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux and then try to ./configure again. Let us know if that helps. Thanks Gowrishankar Rajaiyan From ajeet.singh.raina at logica.com Mon Jul 14 09:35:07 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 15:05:07 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <487B1BFD.9030707@gmail.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com> I did the steps said by you but it is throwing error: [root at BL01DL385 cluster-2.03.04]# ./configure Configuring Makefiles for your system... Checking tree: nothing to do Checking kernel: Current kernel version: 2.6.9 Minimum kernel version: 2.6.25 FAILED! Should I have to upgrade the kernel Version. [root at BL01DL385 cluster-2.03.04]# -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R Sent: Monday, July 14, 2008 2:57 PM To: linux clustering Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. 
Singh Raina, Ajeet wrote: > > I have an old Cluster RPM installed on my machine.Now I have got > cluster-2.03.04 Package. > > How Can I install it? > > When I tried untarring the package and installing,it threw the > following error: > > [root at loy cluster-2.03.04]# ./configure > > Configuring Makefiles for your system... > > Checking tree: nothing to do > > Checking kernel: > > Unable to find (/usr/src/linux/Makefile)! > Hi Ajeet, Check if you have the kernel development packages installed? If yes then do a [root at gsr1 ~]# cd /usr/src/ [root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux and then try to ./configure again. Let us know if that helps. Thanks Gowrishankar Rajaiyan -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. From ajeet.singh.raina at logica.com Mon Jul 14 09:38:05 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 15:08:05 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17967@in-ex004.groupinfra.com> Which RHEL version I need to install on my system? Pls help. -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 3:05 PM To: linux clustering Subject: RE: [Linux-cluster] How to Install Cluster-2.03.<> Package.. I did the steps said by you but it is throwing error: [root at BL01DL385 cluster-2.03.04]# ./configure Configuring Makefiles for your system... Checking tree: nothing to do Checking kernel: Current kernel version: 2.6.9 Minimum kernel version: 2.6.25 FAILED! Should I have to upgrade the kernel Version. [root at BL01DL385 cluster-2.03.04]# -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R Sent: Monday, July 14, 2008 2:57 PM To: linux clustering Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. Singh Raina, Ajeet wrote: > > I have an old Cluster RPM installed on my machine.Now I have got > cluster-2.03.04 Package. > > How Can I install it? > > When I tried untarring the package and installing,it threw the > following error: > > [root at loy cluster-2.03.04]# ./configure > > Configuring Makefiles for your system... > > Checking tree: nothing to do > > Checking kernel: > > Unable to find (/usr/src/linux/Makefile)! > Hi Ajeet, Check if you have the kernel development packages installed? If yes then do a [root at gsr1 ~]# cd /usr/src/ [root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux and then try to ./configure again. Let us know if that helps. Thanks Gowrishankar Rajaiyan -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. 
It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. From gsrlinux at gmail.com Mon Jul 14 10:02:54 2008 From: gsrlinux at gmail.com (GS R) Date: Mon, 14 Jul 2008 15:32:54 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com> Message-ID: <487B244E.7020003@gmail.com> Singh Raina, Ajeet wrote: > I did the steps said by you but it is throwing error: > > > [root at BL01DL385 cluster-2.03.04]# ./configure > > Configuring Makefiles for your system... > > Checking tree: nothing to do > > Checking kernel: > Current kernel version: 2.6.9 > Minimum kernel version: 2.6.25 > FAILED! > > Should I have to upgrade the kernel Version. > Yes. You will have to upgrade the kernel. Check http://www.kernel.org/ for the latest stable kernel. Thanks Gowrishankar Rajaiyan From ajeet.singh.raina at logica.com Mon Jul 14 10:05:39 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 15:35:39 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <487B244E.7020003@gmail.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation flush this out? Can you let me know the quick steps to do that? I checked with the List of kernel version and I think RHEL 4 Update 3 will be the right choice? -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R Sent: Monday, July 14, 2008 3:33 PM To: linux clustering Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. Singh Raina, Ajeet wrote: > I did the steps said by you but it is throwing error: > > > [root at BL01DL385 cluster-2.03.04]# ./configure > > Configuring Makefiles for your system... > > Checking tree: nothing to do > > Checking kernel: > Current kernel version: 2.6.9 > Minimum kernel version: 2.6.25 > FAILED! > > Should I have to upgrade the kernel Version. > Yes. You will have to upgrade the kernel. Check http://www.kernel.org/ for the latest stable kernel. Thanks Gowrishankar Rajaiyan -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
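Before re-running the cluster configure script it can help to confirm exactly which kernels and kernel build trees are on the box, since the script looks for a prepared tree under /usr/src/linux (or wherever its kernel_src option points). A small sketch; the version strings below are only examples:

[code]
# kernels installed and the one currently running
rpm -q kernel kernel-smp kernel-devel
uname -r

# build trees installed by the kernel-devel packages
ls /usr/src/kernels/

# point configure at a specific tree instead of relying on the
# /usr/src/linux symlink (see ./configure --help for the exact option name)
./configure --kernel_src=/usr/src/kernels/2.6.25-14.x86_64
[/code]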
From gsrlinux at gmail.com Mon Jul 14 10:16:18 2008 From: gsrlinux at gmail.com (GS R) Date: Mon, 14 Jul 2008 15:46:18 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com> Message-ID: <487B2772.4090009@gmail.com> Singh Raina, Ajeet wrote: > I have already setup Cluster-0.9 version Setup.Will Kernel upgradation > flush this out? > Upgrading the kernel should not flush out anything simply because your previous kernel is intact and you can boot into it. But make sure you do a /-Uvh/ and not a/ -ivh/. > Can you let me know the quick steps to do that? > quick steps of what? Not clear what steps you are expecting here. > I checked with the List of kernel version and I think RHEL 4 Update 3 > will be the right choice? > > I am not sure about the RHEL version here. Thats for you to confirm it. :-) Thanks Gowrishankar Rajaiyan > -----Original Message----- > From: linux-cluster-bounces at redhat.com > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R > Sent: Monday, July 14, 2008 3:33 PM > To: linux clustering > Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. > > Singh Raina, Ajeet wrote: > >> I did the steps said by you but it is throwing error: >> >> >> [root at BL01DL385 cluster-2.03.04]# ./configure >> >> Configuring Makefiles for your system... >> >> Checking tree: nothing to do >> >> Checking kernel: >> Current kernel version: 2.6.9 >> Minimum kernel version: 2.6.25 >> FAILED! >> >> Should I have to upgrade the kernel Version. >> >> > Yes. You will have to upgrade the kernel. > Check http://www.kernel.org/ for the latest stable kernel. > > Thanks > Gowrishankar Rajaiyan > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > From ajeet.singh.raina at logica.com Mon Jul 14 10:20:28 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 15:50:28 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <487B2772.4090009@gmail.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com> I am newbie to Kernel Upgradation.I downloaded the patch but donno know how to proceed further.The patch is in .bzip2 format and all I did is run bunzip2 and that did the untar for me. Can you help me with further step? -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R Sent: Monday, July 14, 2008 3:46 PM To: linux clustering Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. Singh Raina, Ajeet wrote: > I have already setup Cluster-0.9 version Setup.Will Kernel upgradation > flush this out? > Upgrading the kernel should not flush out anything simply because your previous kernel is intact and you can boot into it. 
But make sure you do a /-Uvh/ and not a/ -ivh/. > Can you let me know the quick steps to do that? > quick steps of what? Not clear what steps you are expecting here. > I checked with the List of kernel version and I think RHEL 4 Update 3 > will be the right choice? > > I am not sure about the RHEL version here. Thats for you to confirm it. :-) Thanks Gowrishankar Rajaiyan > -----Original Message----- > From: linux-cluster-bounces at redhat.com > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R > Sent: Monday, July 14, 2008 3:33 PM > To: linux clustering > Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. > > Singh Raina, Ajeet wrote: > >> I did the steps said by you but it is throwing error: >> >> >> [root at BL01DL385 cluster-2.03.04]# ./configure >> >> Configuring Makefiles for your system... >> >> Checking tree: nothing to do >> >> Checking kernel: >> Current kernel version: 2.6.9 >> Minimum kernel version: 2.6.25 >> FAILED! >> >> Should I have to upgrade the kernel Version. >> >> > Yes. You will have to upgrade the kernel. > Check http://www.kernel.org/ for the latest stable kernel. > > Thanks > Gowrishankar Rajaiyan > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. > > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. From gsrlinux at gmail.com Mon Jul 14 10:38:33 2008 From: gsrlinux at gmail.com (GS R) Date: Mon, 14 Jul 2008 16:08:33 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com> Message-ID: <487B2CA9.3070601@gmail.com> Singh Raina, Ajeet wrote: > I am newbie to Kernel Upgradation.I downloaded the patch but donno know > how to proceed further.The patch is in .bzip2 format and all I did is > run bunzip2 and that did the untar for me. > Can you help me with further step? > Hope you are doing this on a test machine. Do not try patching your kernel if you are not sure what you are doing. That might be harmful. Try downloading the complete kernel RPM and upgrade it. 
Check for kernel RPMS: http://rpmfind.net/linux/rpm2html/search.php?query=kernel&submit=Search+...&system=&arch= http://rpmfind.net/linux/rpm2html/search.php?query=kernel-devel&submit=Search+...&system=&arch= ftp://rpmfind.net/linux/fedora/releases/9/Everything/i386/os/Packages/kernel-2.6.25-14.fc9.i686.rpm Thanks Gowrishankar Rajaiyan > -----Original Message----- > From: linux-cluster-bounces at redhat.com > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R > Sent: Monday, July 14, 2008 3:46 PM > To: linux clustering > Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. > > Singh Raina, Ajeet wrote: > >> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation >> flush this out? >> >> > Upgrading the kernel should not flush out anything simply because your > previous kernel is intact and you can boot into it. > But make sure you do a /-Uvh/ and not a/ -ivh/. > >> Can you let me know the quick steps to do that? >> >> > quick steps of what? Not clear what steps you are expecting here. > >> I checked with the List of kernel version and I think RHEL 4 Update 3 >> will be the right choice? >> >> >> > I am not sure about the RHEL version here. Thats for you to confirm it. > :-) > > Thanks > Gowrishankar Rajaiyan > > > >> -----Original Message----- >> From: linux-cluster-bounces at redhat.com >> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R >> Sent: Monday, July 14, 2008 3:33 PM >> To: linux clustering >> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. >> >> Singh Raina, Ajeet wrote: >> >> >>> I did the steps said by you but it is throwing error: >>> >>> >>> [root at BL01DL385 cluster-2.03.04]# ./configure >>> >>> Configuring Makefiles for your system... >>> >>> Checking tree: nothing to do >>> >>> Checking kernel: >>> Current kernel version: 2.6.9 >>> Minimum kernel version: 2.6.25 >>> FAILED! >>> >>> Should I have to upgrade the kernel Version. >>> >>> >>> >> Yes. You will have to upgrade the kernel. >> Check http://www.kernel.org/ for the latest stable kernel. >> >> Thanks >> Gowrishankar Rajaiyan >> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> >> >> This e-mail and any attachment is for authorised use by the intended >> > recipient(s) only. It may contain proprietary material, confidential > information and/or be subject to legal privilege. It should not be > copied, disclosed to, retained or used by, any other party. If you are > not an intended recipient then please promptly delete this e-mail and > any attachment and all copies and inform the sender. Thank you. > >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> >> >> > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > From ajeet.singh.raina at logica.com Mon Jul 14 11:46:05 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 17:16:05 +0530 Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package.. In-Reply-To: <487B2CA9.3070601@gmail.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796B@in-ex004.groupinfra.com> I downloaded an old cluster version 2.00.00<> And tried to install. [root at BL02DL385 cluster-2.00.00]# ./configure configure gnbd-kernel Configuring Makefiles for your system... Can't open /usr/src/linux-2.6/include/linux/version.h at ./configure line 95. configure ccs ^[[D Configuring Makefiles for your system... Completed Makefile configuration configure cman Configuring Makefiles for your system... Completed Makefile configuration configure group Configuring Makefiles for your system... Completed Makefile configuration configure dlm Configuring Makefiles for your system... Completed Makefile configuration configure fence Configuring Makefiles for your system... Completed Makefile configuration configure gfs-kernel Configuring Makefiles for your system... Can't open /usr/src/linux-2.6/include/linux/version.h at ./configure line 107. configure gfs Configuring Makefiles for your system... Completed Makefile configuration configure gfs2 Configuring Makefiles for your system... Completed Makefile configuration configure gnbd Configuring Makefiles for your system... Completed Makefile configuration configure rgmanager Configuring Makefiles for your system... Completed Makefile configuration [root at BL02DL385 cluster-2.00.00]# ls ccs cman dlm fence gfs2 gnbd group rgmanager clumon configure doc gfs gfs-kernel gnbd-kernel Makefile scripts [root at BL02DL385 cluster-2.00.00]# make make -C gnbd-kernel all make[1]: Entering directory `/root/cluster-2.00.00/gnbd-kernel' make -C src all make[2]: Entering directory `/root/cluster-2.00.00/gnbd-kernel/src' make -C M=/root/cluster-2.00.00/gnbd-kernel/src modules USING_KBUILD=yes make: *** M=/root/cluster-2.00.00/gnbd-kernel/src: No such file or directory. Stop. make: Entering an unknown directorymake: Leaving an unknown directorymake[2]: *** [all] Error 2 make[2]: Leaving directory `/root/cluster-2.00.00/gnbd-kernel/src' make[1]: *** [all] Error 2 make[1]: Leaving directory `/root/cluster-2.00.00/gnbd-kernel' make: *** [all] Error 2 Any idea why now this issue am I facing? -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R Sent: Monday, July 14, 2008 4:09 PM To: linux clustering Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. Singh Raina, Ajeet wrote: > I am newbie to Kernel Upgradation.I downloaded the patch but donno know > how to proceed further.The patch is in .bzip2 format and all I did is > run bunzip2 and that did the untar for me. > Can you help me with further step? > Hope you are doing this on a test machine. Do not try patching your kernel if you are not sure what you are doing. That might be harmful. Try downloading the complete kernel RPM and upgrade it. 
Check for kernel RPMS: http://rpmfind.net/linux/rpm2html/search.php?query=kernel&submit=Search+ ...&system=&arch= http://rpmfind.net/linux/rpm2html/search.php?query=kernel-devel&submit=S earch+...&system=&arch= ftp://rpmfind.net/linux/fedora/releases/9/Everything/i386/os/Packages/ke rnel-2.6.25-14.fc9.i686.rpm Thanks Gowrishankar Rajaiyan > -----Original Message----- > From: linux-cluster-bounces at redhat.com > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R > Sent: Monday, July 14, 2008 3:46 PM > To: linux clustering > Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. > > Singh Raina, Ajeet wrote: > >> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation >> flush this out? >> >> > Upgrading the kernel should not flush out anything simply because your > previous kernel is intact and you can boot into it. > But make sure you do a /-Uvh/ and not a/ -ivh/. > >> Can you let me know the quick steps to do that? >> >> > quick steps of what? Not clear what steps you are expecting here. > >> I checked with the List of kernel version and I think RHEL 4 Update 3 >> will be the right choice? >> >> >> > I am not sure about the RHEL version here. Thats for you to confirm it. > :-) > > Thanks > Gowrishankar Rajaiyan > > > >> -----Original Message----- >> From: linux-cluster-bounces at redhat.com >> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R >> Sent: Monday, July 14, 2008 3:33 PM >> To: linux clustering >> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package.. >> >> Singh Raina, Ajeet wrote: >> >> >>> I did the steps said by you but it is throwing error: >>> >>> >>> [root at BL01DL385 cluster-2.03.04]# ./configure >>> >>> Configuring Makefiles for your system... >>> >>> Checking tree: nothing to do >>> >>> Checking kernel: >>> Current kernel version: 2.6.9 >>> Minimum kernel version: 2.6.25 >>> FAILED! >>> >>> Should I have to upgrade the kernel Version. >>> >>> >>> >> Yes. You will have to upgrade the kernel. >> Check http://www.kernel.org/ for the latest stable kernel. >> >> Thanks >> Gowrishankar Rajaiyan >> >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> >> >> This e-mail and any attachment is for authorised use by the intended >> > recipient(s) only. It may contain proprietary material, confidential > information and/or be subject to legal privilege. It should not be > copied, disclosed to, retained or used by, any other party. If you are > not an intended recipient then please promptly delete this e-mail and > any attachment and all copies and inform the sender. Thank you. > >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster >> >> >> > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > > This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
> > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. From ajeet.singh.raina at logica.com Mon Jul 14 12:14:00 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 17:44:00 +0530 Subject: [Linux-cluster] KNowing CLuster Version.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com> How can I know which cluster I have installed my system with. I can see the version through system-config-cluster > Help.And it says: 1.9.<>. I don't even see any entry in cluster.conf which shows the cluster version? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Norbert.Nemeth at mscibarra.com Mon Jul 14 12:26:46 2008 From: Norbert.Nemeth at mscibarra.com (Nemeth, Norbert) Date: Mon, 14 Jul 2008 14:26:46 +0200 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com> Message-ID: # cman_tool status 1st line Norbert N?meth From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 2:14 PM To: linux clustering Subject: [Linux-cluster] KNowing CLuster Version.. How can I know which cluster I have installed my system with. I can see the version through system-config-cluster > Help.And it says: 1.9.<>. I don't even see any entry in cluster.conf which shows the cluster version? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. ________________________________ NOTICE: If received in error, please destroy and notify sender. Sender does not intend to waive confidentiality or privilege. Use of this email is prohibited when received in error. Local registered entity: MSCI KFT Metropolitan Court acting as the Court of Registry Registered office: 1138 Budapest, N?pf?rdo utca 22, Hungary Registration No. 01-09-885383 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ajeet.singh.raina at logica.com Mon Jul 14 12:31:30 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 18:01:30 +0530 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796D@in-ex004.groupinfra.com> [root at BL02DL385 ~]# cman_tool status Protocol version: 5.0.1 Config version: 74 Cluster name: Test_Cluster Cluster ID: 59828 Cluster Member: Yes Membership state: Cluster-Member Nodes: 2 Expected_votes: 1 Total_votes: 2 Quorum: 1 Active subsystems: 1 Node name: BL02DL385 Node addresses: 10.14.236.106 That's not correct.It shows 5.0.1 but what I can see ftp://sources.redhat.com/pub/cluster/releases/ .It doesn't matches with any. Actually I am planning to install the same version of Cluster since I am finding difficult to get GFS-module-smp package for my RHEL 4 Update 2 x86_64 system. Can you help me PlssS? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Nemeth, Norbert Sent: Monday, July 14, 2008 5:57 PM To: linux clustering Subject: [Linux-cluster] RE: KNowing CLuster Version.. # cman_tool status 1st line Norbert N?meth From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 2:14 PM To: linux clustering Subject: [Linux-cluster] KNowing CLuster Version.. How can I know which cluster I have installed my system with. I can see the version through system-config-cluster > Help.And it says: 1.9.<>. I don't even see any entry in cluster.conf which shows the cluster version? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. ________________________________ NOTICE: If received in error, please destroy and notify sender. Sender does not intend to waive confidentiality or privilege. Use of this email is prohibited when received in error. Local registered entity: MSCI KFT Metropolitan Court acting as the Court of Registry Registered office: 1138 Budapest, N?pf?rd? utca 22, Hungary Registration No. 01-09-885383 This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Mon Jul 14 12:35:16 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 18:05:16 +0530 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796D@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796E@in-ex004.groupinfra.com> How Can I upgrade my Cluster Version? I need Cluster version more than 2.<>. I tried downloading cluster software through ftp://sources.redhat.com/pub/cluster/releases/ but donno how to proceed. 
Ajeet ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 6:02 PM To: linux clustering Subject: RE: [Linux-cluster] RE: KNowing CLuster Version.. [root at BL02DL385 ~]# cman_tool status Protocol version: 5.0.1 Config version: 74 Cluster name: Test_Cluster Cluster ID: 59828 Cluster Member: Yes Membership state: Cluster-Member Nodes: 2 Expected_votes: 1 Total_votes: 2 Quorum: 1 Active subsystems: 1 Node name: BL02DL385 Node addresses: 10.14.236.106 That's not correct.It shows 5.0.1 but what I can see ftp://sources.redhat.com/pub/cluster/releases/ .It doesn't matches with any. Actually I am planning to install the same version of Cluster since I am finding difficult to get GFS-module-smp package for my RHEL 4 Update 2 x86_64 system. Can you help me PlssS? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Nemeth, Norbert Sent: Monday, July 14, 2008 5:57 PM To: linux clustering Subject: [Linux-cluster] RE: KNowing CLuster Version.. # cman_tool status 1st line Norbert Németh From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 2:14 PM To: linux clustering Subject: [Linux-cluster] KNowing CLuster Version.. How can I know which cluster I have installed my system with. I can see the version through system-config-cluster > Help.And it says: 1.9.<>. I don't even see any entry in cluster.conf which shows the cluster version? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. ________________________________ NOTICE: If received in error, please destroy and notify sender. Sender does not intend to waive confidentiality or privilege. Use of this email is prohibited when received in error. Local registered entity: MSCI KFT Metropolitan Court acting as the Court of Registry Registered office: 1138 Budapest, Népfürdő utca 22, Hungary Registration No. 01-09-885383 This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccaulfie at redhat.com Mon Jul 14 12:36:05 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Mon, 14 Jul 2008 13:36:05 +0100 Subject: [Linux-cluster] KNowing CLuster Version.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com> Message-ID: <487B4835.8030002@redhat.com> Singh Raina, Ajeet wrote: > How can I know which cluster I have installed my system with. > > I can see the version through system-config-cluster > Help.And it says: > 1.9.<>. > > I don't even see any entry in cluster.conf which shows the cluster version? It rather depends on what you actually need to know. the first line of "cman_tool status" shows the cman protocol version. This translates as 5.x.x RHEL4 aka Cluster1 6.x.x RHEL5 aka Cluster2 But cluster3 also uses 6.x.x because we are aiming at backward compatibility from cluster 2 to cluster3. "cluster" is a collection of components and they all have their own versions depending on which updates have been applied to which components. The only way to be totally precise about which "cluster" version you have is to list the RPM or DEB version numbers of the packages. -- Chrissie From ajeet.singh.raina at logica.com Mon Jul 14 12:40:27 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 18:10:27 +0530 Subject: [Linux-cluster] KNowing CLuster Version.. In-Reply-To: <487B4835.8030002@redhat.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796F@in-ex004.groupinfra.com> What do you think this cluster version speak of : [root at BL02DL385 ~]# rpm -qa cman cman-1.0.8-0 [root at BL02DL385 ~]# rpm -qa ccsd [root at BL02DL385 ~]# rpm -qa ccs ccs-1.0.10-0 [root at BL02DL385 ~]# rpm -qa rgmanager rgmanager-1.9.68-1 [root at BL02DL385 ~]# rpm -qa GFS GFS-6.1.15-3 [root at BL02DL385 ~]# rpm -qa GFS-module-smp [root at BL02DL385 ~]# rpm -qa gulm gulm-1.0.10-0 [root at BL02DL385 ~]# rpm -qa system-config-cluster system-config-cluster-1.0.45-1.0 [root at BL02DL385 ~]# -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Christine Caulfield Sent: Monday, July 14, 2008 6:06 PM To: linux clustering Subject: Re: [Linux-cluster] KNowing CLuster Version.. Singh Raina, Ajeet wrote: > How can I know which cluster I have installed my system with. > > I can see the version through system-config-cluster > Help.And it says: > 1.9.<>. > > I don't even see any entry in cluster.conf which shows the cluster version? It rather depends on what you actually need to know. the first line of "cman_tool status" shows the cman protocol version. This translates as 5.x.x RHEL4 aka Cluster1 6.x.x RHEL5 aka Cluster2 But cluster3 also uses 6.x.x because we are aiming at backward compatibility from cluster 2 to cluster3. "cluster" is a collection of components and they all have their own versions depending on which updates have been applied to which components. The only way to be totally precise about which "cluster" version you have is to list the RPM or DEB version numbers of the packages. -- Chrissie -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
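Chrissie's answer above reduces to two quick checks: the cman protocol version from cman_tool and the per-package versions from rpm. The small shell sketch below gathers both in one pass; the package names are simply the RHEL4-era ones quoted in this thread, so trim the list to whatever is actually installed on your node.

# Inventory of cluster component versions (RHEL4-era package names as
# used in this thread; "package ... is not installed" output is harmless).
cman_tool status | head -1   # protocol version: 5.x.x = RHEL4/cluster1, 6.x.x = RHEL5/cluster2
for pkg in ccs cman cman-kernel-smp dlm dlm-kernel-smp fence GFS \
           GFS-kernel-smp gulm magma magma-plugins rgmanager \
           system-config-cluster; do
    rpm -q "$pkg"
done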
From ccaulfie at redhat.com Mon Jul 14 12:46:17 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Mon, 14 Jul 2008 13:46:17 +0100 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796E@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B1796E@in-ex004.groupinfra.com> Message-ID: <487B4A99.1040000@redhat.com> Singh Raina, Ajeet wrote: > How Can I upgrade my Cluster Version? I need Cluster version more than 2.<>. There aren't any (stable) versions later than cluster 2 > I tried downloading cluster software through > ftp://sources.redhat.com/pub/cluster/releases/ but donno how to proceed. > If you don't know what to do with a source tarball, then I strongly recommend you do NOTHING with it. Looking at your later messages you already have RPMs installed, so simply building and installing the tarball with its defaults could have some very nasty effects on your cluster if you don't do it properly. If you're using RPMs and you want to upgrade, get updated RPMs. It's the only way to stay sane :-) Chrissie From ajeet.singh.raina at logica.com Mon Jul 14 12:51:07 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 18:21:07 +0530 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: <487B4A99.1040000@redhat.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17970@in-ex004.groupinfra.com> Thank for the advise. Anyway, My Main Purpose is to install GFS and I was not getting the right package for the same.I have GFS installed but GFS-module-smp is something sucks me everytime reposrting kernel unsupported error.Checked with rh.redhat.com but that dint help. Have You any idea how gonna I find the right package for GFS-module installation. I am not even getting src rpm which can compile and do the needful. Pls Help My Machine is: [root at BL01DL385 ~]# uname -arn Linux BL01DL385 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 2005 x86_64 x86_64 x86_64 GNU/Linux -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Christine Caulfield Sent: Monday, July 14, 2008 6:16 PM To: linux clustering Subject: Re: [Linux-cluster] RE: KNowing CLuster Version.. Singh Raina, Ajeet wrote: > How Can I upgrade my Cluster Version? I need Cluster version more than 2.<>. There aren't any (stable) versions later than cluster 2 > I tried downloading cluster software through > ftp://sources.redhat.com/pub/cluster/releases/ but donno how to proceed. > If you don't know what to do with a source tarball, then I strongly recommend you do NOTHING with it. Looking at your later messages you already have RPMs installed, so simply building and installing the tarball with its defaults could have some very nasty effects on your cluster if you don't do it properly. If you're using RPMs and you want to upgrade, get updated RPMs. It's the only way to stay sane :-) Chrissie -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. 
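One footnote to the GFS-module-smp hunt above: the kernel-module packages of this generation (GFS-kernel-smp, cman-kernel-smp, dlm-kernel-smp) are each built against one specific kernel, so a quick pre-install check against the running kernel saves a failed module load later. The lines below are only a rough sketch under that assumption, and the rpm file name in them is a placeholder for whichever package was actually downloaded.

# The module package must match the running kernel release.
uname -r                                        # e.g. 2.6.9-22.ELsmp
rpm -qpi GFS-kernel-smp-<version>.x86_64.rpm    # placeholder file name
rpm -qp --requires GFS-kernel-smp-<version>.x86_64.rpm | grep -i kernel
# If the kernel the package was built for differs from uname -r, either
# update the kernel or find (or rebuild) a module package that matches.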
From ajeet.singh.raina at logica.com Mon Jul 14 13:03:00 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 14 Jul 2008 18:33:00 +0530 Subject: [Linux-cluster] RE: KNowing CLuster Version.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796E@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17971@in-ex004.groupinfra.com> Now I have installed new version packages: ccs-1.0.10-0.x86_64.rpm ccs-devel-1.0.10-0.x86_64.rpm cluster-cim-0.9.1-8.x86_64.rpm cluster-snmp-0.9.1-8.x86_64.rpm cman-1.0.17-0.x86_64.rpm cman-devel-1.0.17-0.x86_64.rpm cman-kernel-2.6.9-50.2.x86_64.rpm cman-kernel-largesmp-2.6.9-50.2.x86_64.rpm cman-kernel-smp-2.6.9-50.2.x86_64.rpm cman-kernheaders-2.6.9-50.2.x86_64.rpm dlm-1.0.3-1.x86_64.rpm dlm-devel-1.0.3-1.x86_64.rpm dlm-kernel-2.6.9-46.16.x86_64.rpm dlm-kernel-largesmp-2.6.9-46.16.x86_64.rpm dlm-kernel-smp-2.6.9-46.16.x86_64.rpm dlm-kernheaders-2.6.9-46.16.x86_64.rpm fence-1.32.45-1.x86_64.rpm gulm-1.0.10-0.x86_64.rpm gulm-devel-1.0.10-0.x86_64.rpm iddev-2.0.0-4.x86_64.rpm iddev-devel-2.0.0-4.x86_64.rpm ipvsadm-1.24-6.x86_64.rpm luci-0.9.1-8.x86_64.rpm magma-1.0.7-1.x86_64.rpm magma-devel-1.0.7-1.x86_64.rpm magma-plugins-1.0.12-0.x86_64.rpm modcluster-0.9.1-8.x86_64.rpm perl-Net-Telnet-3.03-3.noarch.rpm piranha-0.8.3-1.x86_64.rpm rgmanager-1.9.68-1.x86_64.rpm ricci-0.9.1-8.x86_64.rpm system-config-cluster-1.0.45-1.0.noarch.rpm And when I run #cman_tool status it shows: [root at BL01DL385 clusterrpms]# cman_tool status Protocol version: 5.0.1 Config version: 74 Cluster name: Test_Cluster Cluster ID: 59828 Cluster Member: Yes Membership state: Cluster-Member Nodes: 2 Expected_votes: 1 Total_votes: 2 Quorum: 1 Active subsystems: 4 Node name: BL01DL385 Node addresses: 10.14.236.108 [root at BL01DL385 clusterrpms]# Did it upgraded the cluster version? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 6:05 PM To: linux clustering Subject: RE: [Linux-cluster] RE: KNowing CLuster Version.. How Can I upgrade my Cluster Version? I need Cluster version more than 2.<>. I tried downloading cluster software through ftp://sources.redhat.com/pub/cluster/releases/ but donno how to proceed. Ajeet ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 6:02 PM To: linux clustering Subject: RE: [Linux-cluster] RE: KNowing CLuster Version.. [root at BL02DL385 ~]# cman_tool status Protocol version: 5.0.1 Config version: 74 Cluster name: Test_Cluster Cluster ID: 59828 Cluster Member: Yes Membership state: Cluster-Member Nodes: 2 Expected_votes: 1 Total_votes: 2 Quorum: 1 Active subsystems: 1 Node name: BL02DL385 Node addresses: 10.14.236.106 That's not correct.It shows 5.0.1 but what I can see ftp://sources.redhat.com/pub/cluster/releases/ .It doesn't matches with any. Actually I am planning to install the same version of Cluster since I am finding difficult to get GFS-module-smp package for my RHEL 4 Update 2 x86_64 system. Can you help me PlssS? ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Nemeth, Norbert Sent: Monday, July 14, 2008 5:57 PM To: linux clustering Subject: [Linux-cluster] RE: KNowing CLuster Version.. 
# cman_tool status 1st line Norbert Németh From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 14, 2008 2:14 PM To: linux clustering Subject: [Linux-cluster] KNowing CLuster Version.. How can I know which cluster I have installed my system with. I can see the version through system-config-cluster > Help.And it says: 1.9.<>. I don't even see any entry in cluster.conf which shows the cluster version? Pls Help This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. ________________________________ NOTICE: If received in error, please destroy and notify sender. Sender does not intend to waive confidentiality or privilege. Use of this email is prohibited when received in error. Local registered entity: MSCI KFT Metropolitan Court acting as the Court of Registry Registered office: 1138 Budapest, Népfürdő utca 22, Hungary Registration No. 01-09-885383 This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From beres.laszlo at sys-admin.hu Mon Jul 14 15:58:44 2008 From: beres.laszlo at sys-admin.hu (Laszlo BERES) Date: Mon, 14 Jul 2008 17:58:44 +0200 Subject: [Linux-cluster] openais initscript in RHEL Message-ID: <487B77B4.3090703@sys-admin.hu> Dear all, would you be so kind and tell me what is the exact purpose of the openais initscript in RHEL? Enabling that took my whole day, having horrible error messages and headache :] -- Laszlo BERES RHCE, RHCX senior IT engineer, trainer From andreas.schneider at f-it.biz Mon Jul 14 15:59:57 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Mon, 14 Jul 2008 17:59:57 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: <487B77B4.3090703@sys-admin.hu> Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. 
Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From andreas.schneider at f-it.biz Mon Jul 14 16:12:55 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Mon, 14 Jul 2008 18:12:55 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From anujhere at gmail.com Mon Jul 14 16:40:13 2008 From: anujhere at gmail.com (Anuj Singh (अनुज)) Date: Mon, 14 Jul 2008 22:10:13 +0530 Subject: [Linux-cluster] GFS Installation on iSCSI.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17960@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17960@in-ex004.groupinfra.com> Message-ID: <3120c9e30807140940j3fb5043ajc710972f8627b7a5@mail.gmail.com> http://sources.redhat.com/cluster/wiki/FAQ 2008/7/14 Singh Raina, Ajeet : > Hello Guys, > > My Machine information is: > > [root at BL02DL385 ~]# uname -arn > > Linux BL02DL385 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 2005 x86_64 > x86_64 x86_64 GNU/Linux > > I have downloaded the GFS Package : > > [root at BL02DL385 ~]# rpm -qa GFS > > GFS-6.1.15-3 > > But I am getting other Packages matching my architecture.All I was > searching src package which I can rebuild myself. > > But Wonder whats the steps for that? > > Pls Help > > This e-mail and any attachment is for authorised use by the intended > recipient(s) only. It may contain proprietary material, confidential > information and/or be subject to legal privilege. It should not be copied, > disclosed to, retained or used by, any other party. If you are not an > intended recipient then please promptly delete this e-mail and any > attachment and all copies and inform the sender. Thank you. > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozgurakan at gmail.com Mon Jul 14 19:10:13 2008 From: ozgurakan at gmail.com (Ozgur Akan) Date: Mon, 14 Jul 2008 15:10:13 -0400 Subject: [Linux-cluster] gfs2 performance Message-ID: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> Hi, Unfortunately, we formatted 8TB volume with EXT3 and finally put it into production. I am really disappointed with GFS2 performance, it is not fast enough for large file systems with many files. On the other hand we still use GFS for a 350gb partition with low IO. GFS has many good promises but only for some specific environments with probably low IO, small number of files etc.. 
I think it can never be as fast as EXT3 because of its design and targets but something close would make us more than happy. best wishes, Ozgur Akan -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schneider at f-it.biz Mon Jul 14 19:11:10 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Mon, 14 Jul 2008 21:11:10 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From johnson.eric at gmail.com Mon Jul 14 21:31:06 2008 From: johnson.eric at gmail.com (eric johnson) Date: Mon, 14 Jul 2008 17:31:06 -0400 Subject: [Linux-cluster] gfs2 performance In-Reply-To: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> References: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> Message-ID: Hi Ozgur - It would be interesting to hear you elaborate on the domain of problems you were hoping to have GFS2 solve and then how you ultimately tackled them with just EXT3. I'm certainly not saying that one can't solve them with EXT3 - just curious to see the approach. -Eric 2008/7/14 Ozgur Akan : > Hi, > > Unfortunately, we formatted 8TB volume with EXT3 and finally put it into > production. > > I am really disappointed with GFS2 performance, it is not fast enough for > large file systems with many files. On the other hand we still use GFS for a > 350gb partition with low IO. GFS has many good promises but only for some > specific environments with probably low IO, small number of files etc.. > > I think it can never be as fast as EXT3 because of its design and targets > but something close would make us more than happy. > > best wishes, > Ozgur Akan > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schneider at f-it.biz Mon Jul 14 21:59:51 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Mon, 14 Jul 2008 23:59:51 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 
2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From joe.kraska at baesystems.com Mon Jul 14 22:08:26 2008 From: joe.kraska at baesystems.com (Kraska, Joe A (US SSA)) Date: Mon, 14 Jul 2008 15:08:26 -0700 Subject: [Linux-cluster] gfs2 performance References: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> Message-ID: I have to admit confusion here. GFS2 is a shared file system. EXT3 is not. I would expect shared file systems to always have at least somewhat worse performance than a local file system, for a variety of reasons... in particular the network, eh. Anyway, I'm curious about the status of GFS2, including: how well /ought/ it be working at this point? Joe. From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of eric johnson Sent: Monday, July 14, 2008 2:31 PM To: linux clustering Subject: Re: [Linux-cluster] gfs2 performance Hi Ozgur - It would be interesting to hear you elaborate on the domain of problems you were hoping to have GFS2 solve and then how you ultimately tackled them with just EXT3. I'm certainly not saying that one can't solve them with EXT3 - just curious to see the approach. -Eric 2008/7/14 Ozgur Akan : Hi, Unfortunately, we formatted 8TB volume with EXT3 and finally put it into production. I am really disappointed with GFS2 performance, it is not fast enough for large file systems with many files. On the other hand we still use GFS for a 350gb partition with low IO. GFS has many good promises but only for some specific environments with probably low IO, small number of files etc.. I think it can never be as fast as EXT3 because if its design and targets but something close would make us more than happy. best wishes, Ozgur Akan -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- An HTML attachment was scrubbed... URL: From theophanis_kontogiannis at yahoo.gr Mon Jul 14 22:22:16 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Tue, 15 Jul 2008 01:22:16 +0300 Subject: [Linux-cluster] Issue after gfs2 tools upgrade Message-ID: <011b01c8e600$13dc43d0$3b94cb70$@gr> Hello all I have 5.2 with 2.6.18-92.1.6.el5.centos.plus running for some time with drbd 8.2 Two days ago I made an upgrade to gfs2-utils-0.1.44-1.el5_2.1 Right after the upgrade when trying to mount, I started getting for my gfs2 (running on LV, over VG, over PV over DRBD): GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs2-00" GFS2: fsid=tweety:gfs2-00.0: Joined cluster. Now mounting FS... GFS2: fsid=tweety:gfs2-00.0: jid=0, already locked for use GFS2: fsid=tweety:gfs2-00.0: jid=0: Looking at journal... GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error GFS2: fsid=tweety:gfs2-00.0: inode = 4 25 GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = fs/gfs2/recovery.c, line = 239 GFS2: fsid=tweety:gfs2-00.0: about to withdraw this file system GFS2: fsid=tweety:gfs2-00.0: telling LM to withdraw dlm: closing connection to node 2 Trying to gfs2_fsck -vy /dev/mapper/vg0-data0 gives: Initializing fsck Initializing lists... Recovering journals (this may take a while)jid=0: Looking at journal... jid=0: Failed jid=1: Looking at journal... jid=1: Journal is clean. jid=2: Looking at journal... jid=2: Journal is clean. jid=3: Looking at journal... jid=3: Journal is clean. jid=4: Looking at journal... jid=4: Journal is clean. jid=5: Looking at journal... jid=5: Journal is clean. 
jid=6: Looking at journal... jid=6: Journal is clean. jid=7: Looking at journal... jid=7: Journal is clean. jid=8: Looking at journal... jid=8: Journal is clean. jid=9: Looking at journal... jid=9: Journal is clean. Journal recovery complete. Initializing special inodes... Validating Resource Group index. Level 1 RG check. (level 1 passed) 1392 resource groups found. Setting block ranges... Starting pass1 Checking metadata in Resource Group #0 Checking metadata in Resource Group #1 Checking metadata in Resource Group #2 .................... Checking metadata in Resource Group #1391 Pass1 complete Checking system inode 'master' System inode for 'master' is located at block 23 (0x17) Checking system inode 'root' System inode for 'root' is located at block 22 (0x16) Checking system inode 'inum' System inode for 'inum' is located at block 330990 (0x50cee) Checking system inode 'statfs' System inode for 'statfs' is located at block 330991 (0x50cef) Checking system inode 'jindex' System inode for 'jindex' is located at block 24 (0x18) Checking system inode 'rindex' System inode for 'rindex' is located at block 330992 (0x50cf0) Checking system inode 'quota' System inode for 'quota' is located at block 331026 (0x50d12) Checking system inode 'per_node' System inode for 'per_node' is located at block 328392 (0x502c8) Starting pass1b Looking for duplicate blocks... No duplicate blocks found Pass1b complete Starting pass1c Looking for inodes containing ea blocks... Pass1c complete Starting pass2 Checking system directory inode 'jindex' Checking system directory inode 'per_node' Checking system directory inode 'master' Checking system directory inode 'root' Checking directory inodes. Pass2 complete Starting pass3 Marking root inode connected Marking master directory inode connected Checking directory linkage. Pass3 complete Starting pass4 Checking inode reference counts. Pass4 complete Starting pass5 Verifying Resource Group #0 Verifying Resource Group #1 Verifying Resource Group #2 Verifying Resource Group #3 Verifying Resource Group #4 ............... Verifying Resource Group #1388 Verifying Resource Group #1389 Verifying Resource Group #1390 Verifying Resource Group #1391 Pass5 complete Writing changes to disk Syncing the device. Freeing buffers. gfs2_fsck complete Trying to mount again the fs I get the same error. Any ideas on the issue? Thank you all for your time. Theophanis Kontogiannis -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Tue Jul 15 05:16:10 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Tue, 15 Jul 2008 10:46:10 +0530 Subject: [Linux-cluster] GFS on Shared Storage.. Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> Hi, Please help me getting rid of few confusions. I have setup two Cluster RHEL 4.0 Update2 Cluster Nodes.I installed all the RPM packages manually. I tried running a simple script and killing few process of first node,suddenly the other node took the reponsibilty and it was successful. I was running short of Shared Storage and Planned to setup iSCSI target (Shared Storage) and the two cluster nodes(initiator). Now I want to setup GFS.Do I have to setup GFS both on Cluster Nodes and Shared storage? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. 
It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.schneider at f-it.biz Tue Jul 15 05:17:06 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Tue, 15 Jul 2008 07:17:06 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From ajeet.singh.raina at logica.com Tue Jul 15 05:18:01 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Tue, 15 Jul 2008 10:48:01 +0530 Subject: [Linux-cluster] GFS on Shared Storage.. In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17975@in-ex004.groupinfra.com> Till Now I haven't done any cluster package installation on Shared Storage,Do I need to install RPMs on Shared Storage too. ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Tuesday, July 15, 2008 10:46 AM To: linux clustering Subject: [Linux-cluster] GFS on Shared Storage.. Hi, Please help me getting rid of few confusions. I have setup two Cluster RHEL 4.0 Update2 Cluster Nodes.I installed all the RPM packages manually. I tried running a simple script and killing few process of first node,suddenly the other node took the reponsibilty and it was successful. I was running short of Shared Storage and Planned to setup iSCSI target (Shared Storage) and the two cluster nodes(initiator). Now I want to setup GFS.Do I have to setup GFS both on Cluster Nodes and Shared storage? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsrlinux at gmail.com Tue Jul 15 05:55:25 2008 From: gsrlinux at gmail.com (GS R) Date: Tue, 15 Jul 2008 11:25:25 +0530 Subject: [Linux-cluster] GFS on Shared Storage.. 
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> Message-ID: <487C3BCD.9000609@gmail.com> Singh Raina, Ajeet wrote: > > Hi, > > Please help me getting rid of few confusions. > > I have setup two Cluster RHEL 4.0 Update2 Cluster Nodes.I installed > all the RPM packages manually. > "all the RPM packages" could you please list them. > > I tried running a simple script and killing few process of first > node,suddenly the other node took the reponsibilty and it was successful. > > I was running short of Shared Storage and Planned to setup iSCSI > target (Shared Storage) and the two cluster nodes(initiator). > > Now I want to setup GFS.Do I have to setup GFS both on Cluster Nodes > and Shared storage? > You need the GFS module on both the nodes, connect to the iSCSI target and then you will have to format the shared storage with GFS file system. Thanks Gowrishankar Rajaiyan From sdake at redhat.com Tue Jul 15 06:10:24 2008 From: sdake at redhat.com (Steven Dake) Date: Mon, 14 Jul 2008 23:10:24 -0700 Subject: [Linux-cluster] openais initscript in RHEL In-Reply-To: <487B77B4.3090703@sys-admin.hu> References: <487B77B4.3090703@sys-admin.hu> Message-ID: <1216102224.3663.1.camel@balance> On Mon, 2008-07-14 at 17:58 +0200, Laszlo BERES wrote: > Dear all, > > would you be so kind and tell me what is the exact purpose of the > openais initscript in RHEL? Enabling that took my whole day, having > horrible error messages and headache :] > If you intend to run openais as standalone then you want the init script enabled. If you intend to run cman, it will start the openais processes automatically as part of it's init script and you don't need to start openais with the init script in this case. regards -steve From rottmann at atix.de Tue Jul 15 07:44:23 2008 From: rottmann at atix.de (Reiner Rottmann) Date: Tue, 15 Jul 2008 09:44:23 +0200 Subject: [Linux-cluster] GFS volume filled to the brim - "No space left on device" although still data blocks free Message-ID: <200807150944.30832.rottmann@atix.de> Hello everyone, I've experienced strange behavior on a 20 GB GFS formatted volume (although same behaviour applies to smaller and larger sizes) when reaching the max available disk space by writing lots of 256 byte files in a nested directory structure (~15k files in one dir). The expected behaviour would be that all free data blocks are transformed to inodes and metadata as required but although there are still plenty datablocks free, new 256 byte files cannot be created due to "No space left on device". After that, when creating sequential files via touch, it is expected that they are created till all data blocks are transformed in inodes representing the files. When all data blocks are used, "No space left on device" is expected. But in this strange scenario, files are created at random!? Also when executing gfs_tool reclaim, new files are createable again. But gfs_tool reclaim only should increase the number of already available free data blocks by cleaning unused metadata blocks. In my understanding, it should not be necessary to reclaim blocks, if there are still free data blocks left. Has anyone an explanation for this? Best regards, Reiner Rottmann --%<--------------------------------------------------------------------------- (Filesystem filled with 256 byte files.) 
# for i in $(seq 1 1000); do touch waste.$i; done touch: cannot touch `waste.3': No space left on device touch: cannot touch `waste.6': No space left on device touch: cannot touch `waste.12': No space left on device touch: cannot touch `waste.13': No space left on device touch: cannot touch `waste.15': No space left on device touch: cannot touch `waste.16': No space left on device touch: cannot touch `waste.20': No space left on device touch: cannot touch `waste.25': No space left on device touch: cannot touch `waste.28': No space left on device touch: cannot touch `waste.29': No space left on device touch: cannot touch `waste.32': No space left on device touch: cannot touch `waste.37': No space left on device touch: cannot touch `waste.38': No space left on device touch: cannot touch `waste.39': No space left on device touch: cannot touch `waste.48': No space left on device touch: cannot touch `waste.55': No space left on device touch: cannot touch `waste.56': No space left on device touch: cannot touch `waste.59': No space left on device touch: cannot touch `waste.60': No space left on device touch: cannot touch `waste.63': No space left on device ^C # for i in $(seq 1 1000); do touch waste2.$i; done touch: cannot touch `waste2.1': No space left on device touch: cannot touch `waste2.8': No space left on device touch: cannot touch `waste2.10': No space left on device touch: cannot touch `waste2.11': No space left on device touch: cannot touch `waste2.12': No space left on device touch: cannot touch `waste2.14': No space left on device touch: cannot touch `waste2.17': No space left on device touch: cannot touch `waste2.19': No space left on device touch: cannot touch `waste2.21': No space left on device touch: cannot touch `waste2.24': No space left on device touch: cannot touch `waste2.28': No space left on device touch: cannot touch `waste2.31': No space left on device touch: cannot touch `waste2.32': No space left on device touch: cannot touch `waste2.33': No space left on device touch: cannot touch `waste2.40': No space left on device touch: cannot touch `waste2.43': No space left on device touch: cannot touch `waste2.44': No space left on device touch: cannot touch `waste2.49': No space left on device touch: cannot touch `waste2.54': No space left on device touch: cannot touch `waste2.55': No space left on device touch: cannot touch `waste2.57': No space left on device touch: cannot touch `waste2.58': No space left on device touch: cannot touch `waste2.61': No space left on device ^C # gfs_tool df . 
/mnt/gfstest: SB lock proto = "lock_dlm" SB lock table = "axqa01:gfstest" SB ondisk format = 1309 SB multihost format = 1401 Block size = 1024 Journals = 3 Resource Groups = 78 Mounted lock proto = "lock_dlm" Mounted lock table = "axqa01:gfstest" Mounted host data = "" Journal number = 0 Lock module flags = Local flocks = FALSE Local caching = FALSE Oopses OK = FALSE Type Total Used Free use% ------------------------------------------------------------------------ inodes 18343309 18343309 0 100% metadata 1690156 1687524 2632 100% data 43931 0 43931 0% # rpm -qa | grep -e 'GFS\|cman\|magma\|ccs'|sort GFS-6.1.15-1 GFS-kernel-2.6.9-60.9 GFS-kernel-2.6.9-75.11 GFS-kernel-smp-2.6.9-60.9 GFS-kernel-smp-2.6.9-75.11 ccs-1.0.11-1 cman-1.0.17-0.el4_6.3 cman-kernel-smp-2.6.9-45.15 cman-kernel-smp-2.6.9-53.8 magma-1.0.8-1 magma-devel-1.0.8-1 magma-plugins-1.0.12-0 # cat /etc/redhat-release Red Hat Enterprise Linux AS release 4 (Nahant Update 6) # uname -a Linux realserver10 2.6.9-67.0.4.ELsmp #1 SMP Fri Jan 18 05:00:00 EST 2008 x86_64 x86_64 x86_64 GNU/Linux --%<--------------------------------------------------------------------------- -- Gruss / Regards, Dipl.-Ing. (FH) Reiner Rottmann Phone: +49-89 452 3538-12 http://www.atix.de/ http://open-sharedroot.org/ PGP Key ID: 0xCA67C5A6 PGP Key Fingerprint = BF59FF006360B6E8D48F26B10D9F5A84CA67C5A6 ** ATIX Informationstechnologie und Consulting AG Einsteinstr. 10 85716 Unterschleissheim Deutschland/Germany Phone: +49-89 452 3538-0 Fax: +49-89 990 1766-0 Registergericht: Amtsgericht Muenchen Registernummer: HRB 168930 USt.-Id.: DE209485962 Vorstand: Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.) Vorsitzender des Aufsichtsrats: Dr. Martin Buss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part. URL: From andreas.schneider at f-it.biz Tue Jul 15 07:45:35 2008 From: andreas.schneider at f-it.biz (andreas.schneider at f-it.biz) Date: Tue, 15 Jul 2008 09:45:35 +0200 Subject: [Linux-cluster] Abwesenheits-Notiz In-Reply-To: <200807150944.30832.rottmann@atix.de> Message-ID: Hallo und vielen Dank für Ihre eMail. Ich bin außer Haus und kann Ihre Anfrage voraussichtlich bis 18.07.2008 nicht bearbeiten. Ihre eMail wird aus Gründen der Vertraulichkeit nicht weitergeleitet. Hello and thanks for your email. I'm out of the office and will not be able to answer your request personally until July 18, 2008. Regarding confidentiality, your email is not forwarded in the meantime. Mit freundlichen Grüßen / Best regards, Andreas Schneider F-IT Gesellschaft für IT-Governance mbH Lohnerhofstr. 2 78467 Konstanz Fon: +49 7531 81996-0 Fax: +49 7531 81996-19 From beres.laszlo at sys-admin.hu Tue Jul 15 08:23:47 2008 From: beres.laszlo at sys-admin.hu (Laszlo BERES) Date: Tue, 15 Jul 2008 10:23:47 +0200 Subject: [Linux-cluster] openais initscript in RHEL In-Reply-To: <1216102224.3663.1.camel@balance> References: <487B77B4.3090703@sys-admin.hu> <1216102224.3663.1.camel@balance> Message-ID: <487C5E93.4090104@sys-admin.hu> Steven Dake wrote: > If you intend to run openais as standalone then you want the init script enabled. > > If you intend to run cman, it will start the openais processes > automatically as part of it's init script and you don't need to start > openais with the init script in this case. Steven, thank you for your answer. Can you imagine that Conga somehow enables it? 
Because I could reproduce in a RHEL5.2 environment: installed two new nodes with cluster components, started Conga, created a very simple cluster and after rebooting the cman threw the error below: Jul 14 12:26:36 hurka1 openais[6093]: [SYNC ] Not using a virtual synchrony filter. Jul 14 12:26:36 hurka1 openais[6093]: [MAIN ] ERROR: Could not bind AF_UNIX: Address already in use. Jul 14 12:26:36 hurka1 openais[6093]: [MAIN ] AIS Executive exiting (reason: could not bind to an address). It's clear that openais was chkconfig'ed and started but I don't know why. -- Laszlo BERES RHCE, RHCX senior IT engineer, trainer From ajeet.singh.raina at logica.com Tue Jul 15 09:23:12 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Tue, 15 Jul 2008 14:53:12 +0530 Subject: [Linux-cluster] Cluster through Opforce? In-Reply-To: <487C3BCD.9000609@gmail.com> Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17977@in-ex004.groupinfra.com> Anyone Who has installed Cluster through Veritas Opforce? This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. From fdinitto at redhat.com Tue Jul 15 11:28:41 2008 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Tue, 15 Jul 2008 13:28:41 +0200 (CEST) Subject: [Linux-cluster] Cluster 2.99.06 (development snapshot) released Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 The cluster team and its community are proud to announce the 7th release from the master branch: 2.99.06. The 2.99.XX releases are _NOT_ meant to be used for production environments.. yet. You have been warned: *this code will have no mercy* for your servers and your data. The master branch is the main development tree that receives all new features, code, clean up and a whole brand new set of bugs, At some point in time this code will become the 3.0 stable release. Everybody with test equipment and time to spare, is highly encouraged to download, install and test the 2.99 releases and more important report problems. In order to build the 2.99.06 release you will need: - - openais latest checkout from SVN (r1579 or higher) - - linux kernel 2.6.26 from http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git (but can run on 2.6.25 in compatibility mode) NOTE to packagers: the library API/ABI's are _NOT_ stable (hence 2.9). We are still shipping shared libraries but remember that they can change anytime without warning. A bunch of new shared libraries have been added. The new source tarball can be downloaded here: ftp://sources.redhat.com/pub/cluster/releases/cluster-2.99.06.tar.gz In order to use GFS1, the Linux kernel requires a minimal patch: ftp://sources.redhat.com/pub/cluster/releases/lockproto-exports.patch To report bugs or issues: https://bugzilla.redhat.com/ Would you like to meet the cluster team or members of its community? Join us on IRC (irc.freenode.net #linux-cluster) and share your experience with other sysadministrators or power users. Happy clustering, Fabio Under the hood (from 2.99.05): Benjamin Marzinski (1): [gnbd-kernel] bz 442606: Switch gnbd to use deadline scheduler by default. Bob Peterson (1): gfs2_fsck fails: Unable to read in jindex inode. 
Christine Caulfield (12): [CONFIG] Add ldap configurator [CONFIG] Make ldap put totem in the right place [CONFIG] Improve LDAP error reporting [CMAN] Add a config update callback [CMAN] Only do timestamp check for older nodes. [CMAN] Fix logging options [CMAN] Remove some redundant code. [CONFIG] Add some more ldap comments [CONFIG] Add ldap loader [CONFIG] rename ldap config generator [CONFIG] Add a man page for confdb2ldif [CMAN] Remove some spurious prints David Teigland (16): gfs_controld: basic fixes fenced: revert logsys commits fenced: use logsys fence_node: use simple logsys api fenced/fence_node: use SYSLOGLEVEL fenced: link with liblogsys gfs_controld: support queries from gfs_control gfs_controld: add query code gfs_controld: add journal for new node fenced/dlm_controld/gfs_controld: ccs/cman setup fenced/dlm_controld: fix quorum waiting fenced: tune logsys settings groupd: sync daemon setup/structure with others fenced: enable new logsys mode flag fenced: fix logsys define dlm_controld: set id before recovery Fabio M. Di Nitto (35): [FENCE] Start porting fenced to logsys [FENCE] Make fenced ready to load logsys config [FENCE] Move logsys configuration calls where they belong [CCS] Set debug from syslog_level only when requested [QDISK] Set debug from syslog_level only when requested [FENCE] Allow fenced to configure logsys [FENCE] fenced: separate concept of fork and debugging [CCS] Use common syslog facility [FENCE] fence_node: use logsys for logging to syslog [CMAN] Remove unrequired includes [FENCE] fenced: update man page [GFS2] hexedit does not need syslog [FENCE] fence_tool: document "ls" [CCS] Remove duplicate header [CONFIG] Make sure to reset xml index in not in list mode [CONFIG] Add cluster.conf direct loader [CONFIG] Fix several bugs in XML parsing implementations [BUILD] Add configure options for libldap [BUILD] Allow configuration of docdir [BUILD] Fix docdir default path [BUILD] Add install/uninstall snippets for documents [BUILD] Install ldap schemas and example in document directory [MISC] Documentation cleanup [BUILD] Fix install of telnet_ssl [BUILD] Fix telnet_ssl build [BUILD] Allow users to configure default built-in syslog level [MISC] Use default configured SYSLOGLEVEL across the tree [BUILD] Add make oldconfig target [MISC] Update .gitignore [MISC] Fix logging file query [CONFIG] Fix loadldap include [BUILD] Plug confdb to ldap tool [MISC] Create and install logrotate file [BUILD] Clean extra kernel modules files [MISC] Fix build with newer toolchain Lon Hohberger (6): Revert "[fence] fence_xvmd: Add KVM support; misc cleanups." [fence] fence_xvmd: Add KVM support; misc cleanups. [rgmanager] Fix erroneous broadcast matching in ip.sh [fence] Port XVM to logsys [fence] Fix XVM's debug.c default [fence] Make fence_xvm[d] use normal log levels Marek 'marx' Grac (1): [FENCE] Bug #448822: fence_ilo doesn't work with iLO root (1): [fence] fence_xvmd: Add KVM support; misc cleanups. 
.gitignore | 4 + COPYING.applications | 339 ----------------------- COPYING.libraries | 510 ----------------------------------- COPYRIGHT | 232 ---------------- Makefile | 11 +- README.licence | 33 --- ccs/ccsais/config.c | 2 +- ccs/daemon/ccsd.c | 2 +- ccs/daemon/misc.c | 19 +- cman/daemon/ais.c | 27 +-- cman/daemon/barrier.c | 2 +- cman/daemon/cman-preconfig.c | 100 +++++--- cman/daemon/cman.h | 3 + cman/daemon/cmanconfig.c | 3 +- cman/daemon/cnxman-private.h | 3 - cman/daemon/cnxman-socket.h | 1 + cman/daemon/commands.c | 8 +- cman/daemon/daemon.c | 1 - cman/daemon/logging.c | 2 +- cman/daemon/logging.h | 4 +- cman/lib/libcman.h | 8 +- cman/qdisk/daemon_init.c | 2 +- cman/qdisk/disk.c | 2 +- cman/qdisk/disk_util.c | 2 +- cman/qdisk/main.c | 27 ++- cman/qdisk/mkqdisk.c | 2 +- cman/qdisk/proc.c | 2 +- cman/qdisk/score.c | 2 +- config/Makefile | 2 +- config/libs/libccsconfdb/libccs.c | 13 +- config/plugins/Makefile | 4 + config/plugins/ldap/99cluster.ldif | 138 ++++++++++ config/plugins/ldap/Makefile | 29 ++ config/plugins/ldap/configldap.c | 298 ++++++++++++++++++++ config/plugins/ldap/example.ldif | 137 ++++++++++ config/plugins/xml/Makefile | 27 ++ config/plugins/xml/config.c | 298 ++++++++++++++++++++ config/tools/Makefile | 2 +- config/tools/ldap/Makefile | 29 ++ config/tools/ldap/confdb2ldif.c | 211 +++++++++++++++ config/tools/man/Makefile | 2 +- config/tools/man/confdb2ldif.8 | 64 +++++ configure | 43 +++ doc/COPYING.applications | 339 +++++++++++++++++++++++ doc/COPYING.libraries | 510 +++++++++++++++++++++++++++++++++++ doc/COPYRIGHT | 232 ++++++++++++++++ doc/Makefile | 26 ++ doc/README.licence | 33 +++ doc/cluster.logrotate.in | 8 + fence/agents/ilo/fence_ilo.py | 99 +++++--- fence/agents/lib/Makefile | 2 +- fence/agents/lib/fencing.py.py | 10 +- fence/agents/lib/telnet_ssl.py | 72 +++++ fence/agents/xvm/Makefile | 4 +- fence/agents/xvm/debug.c | 3 + fence/agents/xvm/debug.h | 4 +- fence/agents/xvm/fence_xvm.c | 49 +++- fence/agents/xvm/fence_xvmd.c | 386 ++++++++++++++++++++------ fence/agents/xvm/ip_lookup.c | 2 + fence/agents/xvm/mcast.c | 2 + fence/agents/xvm/options.c | 74 ++++-- fence/agents/xvm/options.h | 1 + fence/agents/xvm/simple_auth.c | 1 + fence/agents/xvm/tcp.c | 2 + fence/agents/xvm/virt.c | 11 +- fence/agents/xvm/xml.c | 5 +- fence/agents/xvm/xvm.h | 4 + fence/fence_node/Makefile | 3 +- fence/fence_node/fence_node.c | 19 +- fence/fence_tool/fence_tool.c | 1 + fence/fenced/Makefile | 5 +- fence/fenced/config.c | 77 ++++-- fence/fenced/cpg.c | 17 +- fence/fenced/fd.h | 31 ++- fence/fenced/fenced.h | 4 +- fence/fenced/group.c | 4 +- fence/fenced/logging.c | 167 ++++++++++++ fence/fenced/main.c | 39 ++- fence/fenced/member_cman.c | 34 ++- fence/fenced/recover.c | 22 +- fence/man/fence_tool.8 | 2 +- fence/man/fenced.8 | 2 +- gfs2/edit/hexedit.c | 2 - gfs2/libgfs2/super.c | 1 + gnbd-kernel/src/gnbd.c | 2 +- group/daemon/Makefile | 3 +- group/daemon/cman.c | 65 +++-- group/daemon/cpg.c | 6 +- group/daemon/gd_internal.h | 53 ++-- group/daemon/groupd.h | 1 + group/daemon/main.c | 291 ++++++++++----------- group/dlm_controld/action.c | 72 +++--- group/dlm_controld/config.c | 101 ++++---- group/dlm_controld/config.h | 8 +- group/dlm_controld/cpg.c | 24 ++- group/dlm_controld/dlm_daemon.h | 14 +- group/dlm_controld/group.c | 5 + group/dlm_controld/main.c | 67 +++--- group/dlm_controld/member_cman.c | 41 ++- group/gfs_control/main.c | 224 +++++++++++++++- group/gfs_controld/config.c | 117 ++++----- group/gfs_controld/cpg-new.c | 149 +++++++---- 
group/gfs_controld/gfs_daemon.h | 20 ++- group/gfs_controld/group.c | 26 ++ group/gfs_controld/main.c | 175 +++++++++++-- group/gfs_controld/member_cman.c | 52 +++-- group/libgfscontrol/libgfscontrol.h | 24 ++- group/tool/Makefile | 8 +- group/tool/main.c | 393 +++++++++++++++++++-------- make/clean.mk | 3 +- make/defines.mk.input | 8 +- make/install.mk | 8 + make/uninstall.mk | 6 + rgmanager/src/resources/ip.sh | 2 +- 114 files changed, 4831 insertions(+), 2096 deletions(-) -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.2 (GNU/Linux) iQIVAwUBSHyJ7wgUGcMLQ3qJAQJNvQ//aofPJqX8MDsTujI6PqgeM/d39pG6a5hp 63DwhkJ3Cdp49bT9qZpw7RoYkiZdgkqmgXSf98jVKsDnTq/YYlpMDNFeY9l1D0tG gMfMiramLzfxqgib2N+KSiA+8qxID5jObHnh438LQivt2+35gAIwokiqdAqXv4Ec 90bZF9JvaU29IJq0DQk7r2Sfe4lFdS6Ys+1rqv2oaJaL9opebv3w30ItcUJBseIn v0bXTnKIWFKAFaCG9t7+p0FlBgsTL2Ubj0cQH8nR9yZbYfCjjI1HOmavPMnj2Oxw 83j0IujlNSPLYIIUDC/UKg24RRH1edNasU/he4JViYcxtq/O5V9E9kb/Tl+YlX6c Y8anyL3TTjvZES5koj7ydXroQFIx8yHbTSVs6z32NVa9sIXJj18D/wl4cjT/MgP/ LOmb4m9w4bSBD8TA05a0t5N9kcvfUGMEWVyJ0JuAr2VLdgPtcclOsYEJU6oAjBGK PN3jwrEreyoXqSZ5+dVAwlVSJrzboPjNMMOl0pxhMjOgrfajw5WWihKPFA08xnD6 w/JC0Hwsmv2QXVu5WKBZ9YymhLt/mK5xlm38jre0hB9aUo0B+LA66XAtx2+6V5+3 qFEmAqMEtJxfn/2ctpeYGqgNECvYlnBHoNZ+R8EB9VtUB8XxdfanBXszQM1yAH0G W/hjHsnpLIg= =/D6I -----END PGP SIGNATURE----- From gusti99 at integra.com.py Tue Jul 15 20:35:49 2008 From: gusti99 at integra.com.py (Gusti Gonzalez) Date: Tue, 15 Jul 2008 16:35:49 -0400 Subject: [Linux-cluster] Re: GFS Shutdown (dlm: gfs1: remove fr 0 ID 2) In-Reply-To: <87bq5d7jm5.fsf@alamut.mobiliz.com.tr> References: <87bq5d7jm5.fsf@alamut.mobiliz.com.tr> Message-ID: Volkan YAZICI escribi?: > Hi, > > During machine reboot/shutdown, process halts after closing crond and > trying to close GFS service. When I sit infront of the screen, I see > display is filled with > > dlm: gfs1: remove fr 0 ID 2 > > lines. What might have caused this? > > > Regards. > I am also experiencing these messages. I see that nobody replied to this thread. Did you manage to find out what was causing this behavior? Best regards, Gustavo. My installation is: CentOS 5.1 kernel 2.6.18-92.1.6.el5PAE GFS 0.1.23-5.el5 From jerlyon at gmail.com Tue Jul 15 21:13:33 2008 From: jerlyon at gmail.com (Jeremy Lyon) Date: Tue, 15 Jul 2008 15:13:33 -0600 Subject: [Linux-cluster] Cluster starts, but a node won't rejoin after reboot In-Reply-To: <779919740805291136i166b37ado2d2d4b21112cbbfe@mail.gmail.com> References: <779919740805221003k5b799927qfc0c11f65e1bf340@mail.gmail.com> <3DDA6E3E456E144DA3BB0A62A7F7F779020C6285@SKYHQAMX08.klasi.is> <779919740805291136i166b37ado2d2d4b21112cbbfe@mail.gmail.com> Message-ID: <779919740807151413ld95365fk8af9a0073d7cecb2@mail.gmail.com> Hi everyone, I wanted to post the fix that we found for this issue. The problem was that RHEL 5.x (3, and 4 too) uses IGMPv3 by default and our network is only using IGMPv2. The server would send out an IGMPv3 packet that was ignored by the network and would not actually get to join any multicast groups until the network devices would send out a broadcast to see if any host wanted to join a multicast group. I added the following to the sysctl.conf's of each node in the cluster and this issue has gone away. # Force IGMPv2 due to Network environment net.ipv4.conf.default.force_igmp_version = 2 net.ipv4.conf.all.force_igmp_version = 2 -Jeremy On Thu, May 29, 2008 at 12:36 PM, Jeremy Lyon wrote: > > I'm having the exact same issue on a RHEL 5.2 system, and have a open >> support case with Redhat. 
When it will be resolved i can post the details >> .... >> > Any word on this? I think I may get my own case going. Do you know if a > bugzilla got assigned to this? > > Thanks! > Jeremy > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From theophanis_kontogiannis at yahoo.gr Tue Jul 15 21:32:34 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Wed, 16 Jul 2008 00:32:34 +0300 Subject: [Linux-cluster] Issue after gfs2 tools upgrade In-Reply-To: <011b01c8e600$13dc43d0$3b94cb70$@gr> References: <011b01c8e600$13dc43d0$3b94cb70$@gr> Message-ID: <017801c8e6c2$4d94f230$e8bed690$@gr> Hello all again. All efforts to mount my file system fail, as well as fsck. Any ideas on how to correct jid 0?? Thank you all for your time, Theophanis Kontogiannis From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Theophanis Kontogiannis Sent: Tuesday, July 15, 2008 1:22 AM To: 'linux clustering' Subject: [Linux-cluster] Issue after gfs2 tools upgrade Hello all I have 5.2 with 2.6.18-92.1.6.el5.centos.plus running for some time with drbd 8.2 Two days ago I made an upgrade to gfs2-utils-0.1.44-1.el5_2.1 Right after the upgrade when trying to mount, I started getting for my gfs2 (running on LV, over VG, over PV over DRBD): GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs2-00" GFS2: fsid=tweety:gfs2-00.0: Joined cluster. Now mounting FS... GFS2: fsid=tweety:gfs2-00.0: jid=0, already locked for use GFS2: fsid=tweety:gfs2-00.0: jid=0: Looking at journal... GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error GFS2: fsid=tweety:gfs2-00.0: inode = 4 25 GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = fs/gfs2/recovery.c, line = 239 GFS2: fsid=tweety:gfs2-00.0: about to withdraw this file system GFS2: fsid=tweety:gfs2-00.0: telling LM to withdraw dlm: closing connection to node 2 Trying to gfs2_fsck -vy /dev/mapper/vg0-data0 gives: Initializing fsck Initializing lists... Recovering journals (this may take a while)jid=0: Looking at journal... jid=0: Failed jid=1: Looking at journal... jid=1: Journal is clean. jid=2: Looking at journal... jid=2: Journal is clean. jid=3: Looking at journal... jid=3: Journal is clean. jid=4: Looking at journal... jid=4: Journal is clean. jid=5: Looking at journal... jid=5: Journal is clean. jid=6: Looking at journal... jid=6: Journal is clean. jid=7: Looking at journal... jid=7: Journal is clean. jid=8: Looking at journal... jid=8: Journal is clean. jid=9: Looking at journal... jid=9: Journal is clean. Journal recovery complete. Initializing special inodes... Validating Resource Group index. Level 1 RG check. (level 1 passed) 1392 resource groups found. Setting block ranges... Starting pass1 Checking metadata in Resource Group #0 Checking metadata in Resource Group #1 Checking metadata in Resource Group #2 .................... 
Checking metadata in Resource Group #1391 Pass1 complete Checking system inode 'master' System inode for 'master' is located at block 23 (0x17) Checking system inode 'root' System inode for 'root' is located at block 22 (0x16) Checking system inode 'inum' System inode for 'inum' is located at block 330990 (0x50cee) Checking system inode 'statfs' System inode for 'statfs' is located at block 330991 (0x50cef) Checking system inode 'jindex' System inode for 'jindex' is located at block 24 (0x18) Checking system inode 'rindex' System inode for 'rindex' is located at block 330992 (0x50cf0) Checking system inode 'quota' System inode for 'quota' is located at block 331026 (0x50d12) Checking system inode 'per_node' System inode for 'per_node' is located at block 328392 (0x502c8) Starting pass1b Looking for duplicate blocks... No duplicate blocks found Pass1b complete Starting pass1c Looking for inodes containing ea blocks... Pass1c complete Starting pass2 Checking system directory inode 'jindex' Checking system directory inode 'per_node' Checking system directory inode 'master' Checking system directory inode 'root' Checking directory inodes. Pass2 complete Starting pass3 Marking root inode connected Marking master directory inode connected Checking directory linkage. Pass3 complete Starting pass4 Checking inode reference counts. Pass4 complete Starting pass5 Verifying Resource Group #0 Verifying Resource Group #1 Verifying Resource Group #2 Verifying Resource Group #3 Verifying Resource Group #4 ............... Verifying Resource Group #1388 Verifying Resource Group #1389 Verifying Resource Group #1390 Verifying Resource Group #1391 Pass5 complete Writing changes to disk Syncing the device. Freeing buffers. gfs2_fsck complete Trying to mount again the fs I get the same error. Any ideas on the issue? Thank you all for your time. Theophanis Kontogiannis -------------- next part -------------- An HTML attachment was scrubbed... URL: From tiagocruz at forumgdh.net Wed Jul 16 13:56:18 2008 From: tiagocruz at forumgdh.net (Tiago Cruz) Date: Wed, 16 Jul 2008 10:56:18 -0300 Subject: [Linux-cluster] Jornals on GFS Message-ID: <1216216578.8375.78.camel@tuxkiller.ig.com.br> Is there some problem if I format my GFS with, for example, 10 jornals if I'm using only 3? 'Cause we never know about the future... :-) Thanks -- Tiago Cruz http://everlinux.com Linux User #282636 From harry.sutton at hp.com Wed Jul 16 14:11:32 2008 From: harry.sutton at hp.com (Sutton, Harry (MSE)) Date: Wed, 16 Jul 2008 10:11:32 -0400 Subject: [Linux-cluster] Jornals on GFS In-Reply-To: <1216216578.8375.78.camel@tuxkiller.ig.com.br> References: <1216216578.8375.78.camel@tuxkiller.ig.com.br> Message-ID: <487E0194.2040001@hp.com> Tiago Cruz wrote: > Is there some problem if I format my GFS with, for example, 10 > jornals if I'm using only 3? 'Cause we never know about the > future... :-) > > Thanks > > -- > Tiago Cruz > http://everlinux.com > Linux User #282636 > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > > You're better off having more than you think you'll need, because once you use them up you can't add more without recreating / reformatting the filesystem. There are no problems I know of in creating even a multiple of initially-unused journals (other than the relatively small space they take up.) /Harry Sutton, RHCA -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 6268 bytes Desc: S/MIME Cryptographic Signature URL: From hlawatschek at atix.de Wed Jul 16 18:46:54 2008 From: hlawatschek at atix.de (Mark Hlawatschek) Date: Wed, 16 Jul 2008 20:46:54 +0200 Subject: [Linux-cluster] NFS over GFS issue Message-ID: <200807162046.55198.hlawatschek@atix.de> Hi, During some stress tests with NFS over GFS, I observed a strange problem. The test setup consists of two GFS cluster nodes (node1, node2) (RHEL4.6), both serving the same NFS exports (/mnt/gfstest) The NFS exports are mounted by two NFS clients (client1, client2), whereas client1 has mounted the NFS export from node1 and client2 has mounted the NFS export from node2. During the stress test, client1 creates files into dir1 on the GFS and client2 created files into dir2 on the same GFS. Node1 continuously reads the files created by client1 and client2. After some time (about 10 minutes) the following error occurs on node1: GFS: fsid=axqa01:gfstest.0: fatal: assertion "!bd->bd_pinned && !buffer_busy(bh)" failed GFS: fsid=axqa01:gfstest.0: function = ail_empty_gl GFS: fsid=axqa01:gfstest.0: file = /builddir/build/BUILD/gfs-kernel-2.6.9-75/smp/src/gfs/dio.c, line = 383 GFS: fsid=axqa01:gfstest.0: time = 1216216523 GFS: fsid=axqa01:gfstest.0: about to withdraw from the cluster GFS: fsid=axqa01:gfstest.0: waiting for outstanding I/O GFS: fsid=axqa01:gfstest.0: telling LM to withdraw lock_dlm: withdraw abandoned memory GFS: fsid=axqa01:gfstest.0: withdrawn Is there a workaround for this problem ? Is this a bug ? Thanks, Mark -- Gruss / Regards, Dipl.-Ing. Mark Hlawatschek http://www.atix.de/ http://www.open-sharedroot.org/ ** ATIX Informationstechnologie und Consulting AG Einsteinstr. 10 85716 Unterschleissheim Deutschland/Germany From kanderso at redhat.com Wed Jul 16 18:56:19 2008 From: kanderso at redhat.com (Kevin Anderson) Date: Wed, 16 Jul 2008 13:56:19 -0500 Subject: [Linux-cluster] NFS over GFS issue In-Reply-To: <200807162046.55198.hlawatschek@atix.de> References: <200807162046.55198.hlawatschek@atix.de> Message-ID: <1216234580.4070.30.camel@dhcp80-204.msp.redhat.com> Mark, Looks very similar to a bug that has been recently reported. Unfortunately the bug has a number of internal view only flags, so can't enable it for external access. Your report is the first one to mention NFS. Is it possible to get your test cases? Thanks Kevin On Wed, 2008-07-16 at 20:46 +0200, Mark Hlawatschek wrote: > Hi, > > During some stress tests with NFS over GFS, I observed a strange problem. > > The test setup consists of two GFS cluster nodes (node1, node2) (RHEL4.6), > both serving the same NFS exports (/mnt/gfstest) > The NFS exports are mounted by two NFS clients (client1, client2), whereas > client1 has mounted the NFS export from node1 and client2 has mounted the NFS > export from node2. > > During the stress test, client1 creates files into dir1 on the GFS and client2 > created files into dir2 on the same GFS. Node1 continuously reads the files > created by client1 and client2. 
After some time (about 10 minutes) the > following error occurs on node1: > > GFS: fsid=axqa01:gfstest.0: fatal: assertion "!bd->bd_pinned > && !buffer_busy(bh)" failed > GFS: fsid=axqa01:gfstest.0: function = ail_empty_gl > GFS: fsid=axqa01:gfstest.0: file > = /builddir/build/BUILD/gfs-kernel-2.6.9-75/smp/src/gfs/dio.c, line = 383 > GFS: fsid=axqa01:gfstest.0: time = 1216216523 > GFS: fsid=axqa01:gfstest.0: about to withdraw from the cluster > GFS: fsid=axqa01:gfstest.0: waiting for outstanding I/O > GFS: fsid=axqa01:gfstest.0: telling LM to withdraw > lock_dlm: withdraw abandoned memory > GFS: fsid=axqa01:gfstest.0: withdrawn > > Is there a workaround for this problem ? Is this a bug ? > > Thanks, > > Mark > From adas at redhat.com Wed Jul 16 19:01:59 2008 From: adas at redhat.com (Abhijith Das) Date: Wed, 16 Jul 2008 14:01:59 -0500 Subject: [Linux-cluster] NFS over GFS issue In-Reply-To: <200807162046.55198.hlawatschek@atix.de> References: <200807162046.55198.hlawatschek@atix.de> Message-ID: <487E45A7.1040100@redhat.com> Mark Hlawatschek wrote: > Hi, > > During some stress tests with NFS over GFS, I observed a strange problem. > > The test setup consists of two GFS cluster nodes (node1, node2) (RHEL4.6), > both serving the same NFS exports (/mnt/gfstest) > The NFS exports are mounted by two NFS clients (client1, client2), whereas > client1 has mounted the NFS export from node1 and client2 has mounted the NFS > export from node2. > > During the stress test, client1 creates files into dir1 on the GFS and client2 > created files into dir2 on the same GFS. Node1 continuously reads the files > created by client1 and client2. After some time (about 10 minutes) the > following error occurs on node1: > > GFS: fsid=axqa01:gfstest.0: fatal: assertion "!bd->bd_pinned > && !buffer_busy(bh)" failed > GFS: fsid=axqa01:gfstest.0: function = ail_empty_gl > GFS: fsid=axqa01:gfstest.0: file > = /builddir/build/BUILD/gfs-kernel-2.6.9-75/smp/src/gfs/dio.c, line = 383 > GFS: fsid=axqa01:gfstest.0: time = 1216216523 > GFS: fsid=axqa01:gfstest.0: about to withdraw from the cluster > GFS: fsid=axqa01:gfstest.0: waiting for outstanding I/O > GFS: fsid=axqa01:gfstest.0: telling LM to withdraw > lock_dlm: withdraw abandoned memory > GFS: fsid=axqa01:gfstest.0: withdrawn > > Is there a workaround for this problem ? Is this a bug ? > > Thanks, > > Mark > > This is a bug (https://bugzilla.redhat.com/show_bug.cgi?id=445000). We were hitting this occasionally in our testing, but not frequently enough to help us debug it. I'm going to try your test case to see if I can recreate the problem reliably. Thanks! --Abhi From hlawatschek at atix.de Thu Jul 17 08:07:54 2008 From: hlawatschek at atix.de (Mark Hlawatschek) Date: Thu, 17 Jul 2008 10:07:54 +0200 Subject: [Linux-cluster] NFS over GFS issue In-Reply-To: <1216234580.4070.30.camel@dhcp80-204.msp.redhat.com> References: <200807162046.55198.hlawatschek@atix.de> <1216234580.4070.30.camel@dhcp80-204.msp.redhat.com> Message-ID: <200807171007.54513.hlawatschek@atix.de> Kevin, I created a new bugzilla entry (https://bugzilla.redhat.com/show_bug.cgi?id=455696) to track the problem. Mark On Wednesday 16 July 2008 20:56:19 Kevin Anderson wrote: > Mark, > > Looks very similar to a bug that has been recently reported. > Unfortunately the bug has a number of internal view only flags, so can't > enable it for external access. Your report is the first one to mention > NFS. Is it possible to get your test cases? 
> > Thanks > Kevin > > On Wed, 2008-07-16 at 20:46 +0200, Mark Hlawatschek wrote: > > Hi, > > > > During some stress tests with NFS over GFS, I observed a strange problem. > > > > The test setup consists of two GFS cluster nodes (node1, node2) > > (RHEL4.6), both serving the same NFS exports (/mnt/gfstest) > > The NFS exports are mounted by two NFS clients (client1, client2), > > whereas client1 has mounted the NFS export from node1 and client2 has > > mounted the NFS export from node2. > > > > During the stress test, client1 creates files into dir1 on the GFS and > > client2 created files into dir2 on the same GFS. Node1 continuously reads > > the files created by client1 and client2. After some time (about 10 > > minutes) the following error occurs on node1: > > > > GFS: fsid=axqa01:gfstest.0: fatal: assertion "!bd->bd_pinned > > && !buffer_busy(bh)" failed > > GFS: fsid=axqa01:gfstest.0: function = ail_empty_gl > > GFS: fsid=axqa01:gfstest.0: file > > = /builddir/build/BUILD/gfs-kernel-2.6.9-75/smp/src/gfs/dio.c, line = 383 > > GFS: fsid=axqa01:gfstest.0: time = 1216216523 > > GFS: fsid=axqa01:gfstest.0: about to withdraw from the cluster > > GFS: fsid=axqa01:gfstest.0: waiting for outstanding I/O > > GFS: fsid=axqa01:gfstest.0: telling LM to withdraw > > lock_dlm: withdraw abandoned memory > > GFS: fsid=axqa01:gfstest.0: withdrawn > > > > Is there a workaround for this problem ? Is this a bug ? > > > > Thanks, > > > > Mark > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster -- Gruss / Regards, Dipl.-Ing. Mark Hlawatschek http://www.atix.de/ http://www.open-sharedroot.org/ ** ATIX Informationstechnologie und Consulting AG Einsteinstr. 10 85716 Unterschleissheim Deutschland/Germany From carlopmart at gmail.com Thu Jul 17 11:37:52 2008 From: carlopmart at gmail.com (carlopmart) Date: Thu, 17 Jul 2008 13:37:52 +0200 Subject: [Linux-cluster] Configuring only one node tu use gfs2 Message-ID: <487F2F10.4060105@gmail.com> Hi all, I need to setup a GFS2 partition to store vmachines but only in one host (rhel5.2). How can I configure GFS2 do this? Do I need to install cluster suite?? Thanks. -- CL Martinez carlopmart {at} gmail {d0t} com From rpeterso at redhat.com Thu Jul 17 19:12:38 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Thu, 17 Jul 2008 14:12:38 -0500 Subject: [Linux-cluster] Jornals on GFS In-Reply-To: <487E0194.2040001@hp.com> References: <1216216578.8375.78.camel@tuxkiller.ig.com.br> <487E0194.2040001@hp.com> Message-ID: <1216321958.24666.2.camel@technetium.msp.redhat.com> On Wed, 2008-07-16 at 10:11 -0400, Sutton, Harry (MSE) wrote: > Tiago Cruz wrote: > > Is there some problem if I format my GFS with, for example, 10 > > jornals if I'm using only 3? 'Cause we never know about the > > future... :-) > > > > Thanks > of initially-unused journals (other than the relatively small space they > take up.) Hi, That "relatively small space they take up" is 128MB for each journal by default. So if you format for 30 journals, that's 3.75GB, if that's acceptable to you. 
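As a rough illustration (the device, cluster name and file system name below are placeholders, not taken from this thread), formatting with spare journals, or with smaller journals to save space, would look roughly like this:

  gfs_mkfs -p lock_dlm -t mycluster:myfs -j 10 /dev/myvg/mylv
  # or keep the 10 journals but shrink each one (-J is the journal size in MB):
  gfs_mkfs -p lock_dlm -t mycluster:myfs -j 10 -J 64 /dev/myvg/mylv

The -j count only has to cover the number of nodes you ever expect to mount the file system at the same time.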
Regards, Bob Peterson Red Hat Clustering & GFS From rpeterso at redhat.com Thu Jul 17 20:46:45 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Thu, 17 Jul 2008 15:46:45 -0500 Subject: [Linux-cluster] Configuring only one node tu use gfs2 In-Reply-To: <487F2F10.4060105@gmail.com> References: <487F2F10.4060105@gmail.com> Message-ID: <1216327605.24666.8.camel@technetium.msp.redhat.com> On Thu, 2008-07-17 at 13:37 +0200, carlopmart wrote: > Hi all, > > I need to setup a GFS2 partition to store vmachines but only in one host > (rhel5.2). How can I configure GFS2 do this? Do I need to install cluster suite?? > > Thanks. Hi, All you need to do is something like this: mkfs.gfs2 -j 1 -p lock_nolock /dev/your/device Regards, Bob Peterson Red Hat Clustering & GFS From garromo at us.ibm.com Thu Jul 17 22:30:42 2008 From: garromo at us.ibm.com (Gary Romo) Date: Thu, 17 Jul 2008 16:30:42 -0600 Subject: [Linux-cluster] Configuring only one node tu use gfs2 In-Reply-To: <1216327605.24666.8.camel@technetium.msp.redhat.com> Message-ID: Do you need to install cluster suite in order to use gfs? Gary Romo Bob Peterson To Sent by: linux clustering linux-cluster-bou nces at redhat.com cc Subject 07/17/2008 02:46 Re: [Linux-cluster] Configuring PM only one node tu use gfs2 Please respond to rpeterso at redhat.c om; Please respond to linux clustering On Thu, 2008-07-17 at 13:37 +0200, carlopmart wrote: > Hi all, > > I need to setup a GFS2 partition to store vmachines but only in one host > (rhel5.2). How can I configure GFS2 do this? Do I need to install cluster suite?? > > Thanks. Hi, All you need to do is something like this: mkfs.gfs2 -j 1 -p lock_nolock /dev/your/device Regards, Bob Peterson Red Hat Clustering & GFS -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pic27802.gif Type: image/gif Size: 1255 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From sunhux at gmail.com Fri Jul 18 05:13:04 2008 From: sunhux at gmail.com (sunhux G) Date: Fri, 18 Jul 2008 13:13:04 +0800 Subject: [Linux-cluster] VMWare ESX or RHEL or HP Gbe2c switch problem? Message-ID: <60f08e700807172213w2427c131i8e0159e53f34493e@mail.gmail.com> I would like to just zoom into my specific problem here : We have a HP Proliant 480C blade that comes with a HP Gbe2c Layer3 switch built in (ie you can't see the NIC of the blade server, only switch ports). It's VMWare ESX & we have just built one Windows client in it & now we have just installed a second guest client (a noncluster RHEL Ver 5.2) : a)we have succeeded in building a Windows guest client inside this VMWare ESX earlier. The Windows client has an IP addr 10.51.x.y & it uses the very same switch & is able to get connected to other LANs. From this Windows client, I'm able to ping to other external servers on other subnets & vice-versa. 
The Layer 3 switch could ping to the Windows client too b)We just installed Redhat Linux 5.2 inside the same VMWare, it's able to to boot up fine, just that it's unable to even reach the built-in Layer 3 switch: From this Layer3 switch, I'm able to ping to the Windows guest client 10.51.x.y but from this built-in Layer3 switch, I'm unable to ping to the Linux client (with vswif0 interface addr 172.17.y.z up & running) c) I then tried to configure an IP addr for eth2 (172.17.y.t) in this Linux client, restart network (shown as Ok for both vswif0 & eth2 during "/etc/init.d/network restart" but it did not help make (still unreachable) What's the problem here? The switch has one default VLAN1 in it - must I specify "tagging" during the Linux installation to this VLAN1 or I'll need to create another VLAN in the HP built-in switch or ...... ? Is there a firewall within Linux which could have prevented these connectivity & if so how do we disable it? Any ideas/brainstorming suggestions appreciated Thanks U -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpeterso at redhat.com Fri Jul 18 18:18:07 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Fri, 18 Jul 2008 13:18:07 -0500 Subject: [Linux-cluster] Configuring only one node tu use gfs2 In-Reply-To: References: Message-ID: <1216405087.24666.40.camel@technetium.msp.redhat.com> On Thu, 2008-07-17 at 16:30 -0600, Gary Romo wrote: > Do you need to install cluster suite in order to use gfs? > > Gary Romo No, cluster suite is not needed for GFS2. Cluster Suite is kind of a RHEL4 term anyway. To use gfs2, all you need is a Linux kernel that has it. Hopefully you take a very recent kernel so that the code has fewer bugs in it. Oh, and you probably want a recent gfs2-utils package too so you can do things like mkfs.gfs2 and gfs2_fsck. Regards, Bob Peterson Red Hat Clustering & GFS From raju.rajsand at gmail.com Mon Jul 21 05:28:30 2008 From: raju.rajsand at gmail.com (Rajagopal Swaminathan) Date: Mon, 21 Jul 2008 10:58:30 +0530 Subject: [Linux-cluster] GFS on Shared Storage.. In-Reply-To: <487C3BCD.9000609@gmail.com> References: <0139539A634FD04A99C9B8880AB70CB209B17974@in-ex004.groupinfra.com> <487C3BCD.9000609@gmail.com> Message-ID: <8786b91c0807202228q4722421cof47e2862843b9fd6@mail.gmail.com> Greetings, On 7/15/08, GS R wrote: > > Singh Raina, Ajeet wrote: > >> >> Hi, >> >> Please help me getting rid of few confusions. >> >> I have setup two Cluster RHEL 4.0 Update2 Cluster Nodes.I installed all >> the RPM packages manually. >> >> "all the RPM packages" could you please list them. > >> >> I tried running a simple script and killing few process of first >> node,suddenly the other node took the reponsibilty and it was successful. >> >> I was running short of Shared Storage and Planned to setup iSCSI target >> (Shared Storage) and the two cluster nodes(initiator). >> >> Now I want to setup GFS.Do I have to setup GFS both on Cluster Nodes and >> Shared storage? >> >> You need the GFS module on both the nodes, connect to the iSCSI target and > then you will have to format the shared storage with GFS file system. Further the formatting of the GFS need to be done from only one node of the cluster. Have you checked the special lock setting for CLVM. 
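In practice that lock setting amounts to something like the following on each node that will use the shared storage (a sketch; the exact commands and file locations can vary between releases):

  lvmconf --enable-cluster    # sets locking_type = 3 in /etc/lvm/lvm.conf
  service clvmd start
  chkconfig clvmd on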
One just needs to mount the formatted volumes on the other nodes Rajagopal Thanks > Gowrishankar Rajaiyan > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From maurizio.rottin at gmail.com Mon Jul 21 07:38:41 2008 From: maurizio.rottin at gmail.com (Maurizio Rottin) Date: Mon, 21 Jul 2008 09:38:41 +0200 Subject: [Linux-cluster] gfs2 performance In-Reply-To: References: <68f132770807141210j6786ef3bhb944316f37d153c5@mail.gmail.com> Message-ID: > 2008/7/14 Ozgur Akan : > > Hi, > > Unfortunately, we formatted 8TB volume with EXT3 and finally put it into > production. > > I am really disappointed with GFS2 performance, it is not fast enough for > large file systems with many files. On the other hand we still use GFS for a > 350gb partition with low IO. GFS has many good promises but only for some > specific environments with probably low IO, small number of files etc.. > > I think it can never be as fast as EXT3 because of its design and targets > but something close would make us more than happy. > I am not! I did a lot of benchmarking with bonnie++ on a 450 GB filesystem. I was testing ext3, gfs2, and gfs. ext3 is obviously the fastest, but I need a clustered file system, so it was only taken as a "best measure". Then I tried gfs2 and gfs, with one, two and three servers writing with bonnie++ at the same time. The results showed gfs2 better than gfs in almost every bonnie++ test, and it was close enough to ext3 to use it. But always remember that it is a clustered filesystem! Now I wait for your results on a bigger 8TB fs. -- mr
From ben.yarwood at juno.co.uk Mon Jul 21 09:29:07 2008 From: ben.yarwood at juno.co.uk (Ben Yarwood) Date: Mon, 21 Jul 2008 10:29:07 +0100 Subject: [Linux-cluster] GFS assertion failure Message-ID: <00c101c8eb14$399ec980$acdc5c80$@yarwood@juno.co.uk> I have a three-node cluster running the latest 4.6 code with 14 gfs file systems. On a three-month-old, heavily used gfs file system which has never had any problems (no shared storage power outages or anything else I can think of that could have caused a problem in the fs), I got the following error and a withdraw:
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: fatal: assertion "FALSE" failed
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: function = xmote_bh
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: file = /builddir/build/BUILD/gfs-kernel-2.6.9-75/smp/src/gfs/glock.c, line = 1093
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: time = 1216415126
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: about to withdraw from the cluster
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: waiting for outstanding I/O
Jul 18 22:05:26 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: telling LM to withdraw
Jul 18 22:05:27 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: withdrawn
Jul 18 22:05:27 jrmedia-c kernel: GFS: fsid=alpha_cluster:wav-4.2: ret = 0x00000002
The file system wouldn't unmount after this unfortunately and the only way to get the node up and running again was to do a fence. I checked bugzilla and can't find anything still open relating to this. Can anyone: 1. Suggest a good strategy for trying to get the fs unmounted so that a fence is not required and a normal reboot can be done? 2.
Suggest what information I should have captured to better help debugging in the future, I think this would make a good FAQ and be helpful to all. Finally in the FAQ it says that after a gfs withdraws, the node should be rebooted before remounting, is this correct and is this related to replaying journals? What would happen if you didn't reboot? Cheers Ben From Alain.Moulle at bull.net Mon Jul 21 13:01:41 2008 From: Alain.Moulle at bull.net (Alain Moulle) Date: Mon, 21 Jul 2008 15:01:41 +0200 Subject: [Linux-cluster] CS5 / ip addr instead of node name in cluster.conf ? Message-ID: <488488B5.1000501@bull.net> Hi I think I remember that with CS4, it was possible to set IP addr instead of node name in cluster.conf such as : " did not show anything, "umount -f " did not work. ("umount -l " did the job) But when the clustermanager failed on that, it also failes on the MD script and goes into "failed" status, with a message that "manual intervention is needed". Why does the node not get fenced down? Upon "reboot -f" the service does not start until the faulty node is back online. Are there any magical things one can put in cluster.conf to get the behavior I want? That if a service does not want to stop cleanly, fence the node and start the service on another node? regards Jonas -- Jonas Helgi Palsson From fdinitto at redhat.com Tue Jul 22 04:16:38 2008 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Tue, 22 Jul 2008 06:16:38 +0200 Subject: [Linux-cluster] HA Cluster Developer Summit 2008: call for participants Message-ID: <1216700198.26234.36.camel@daitarn-fedora.int.fabbione.net> Hi, the HA Cluster Developer Summit 2008 will (1) take place in Prague (2), the beautiful capital of the Czech Republic, starting the 29th of Sept and finishing the 1st of October (3). Confirm your paticipation _before_ the end of July here: http://sources.redhat.com/cluster/wiki/ClusterSummit2008 Fill the table at the bottom of the page with the minimal required information. Your real name is required if you want us to book the hotel for you. Email address is required for us to contact you if more information are required. Make sure to confirm asap. If uncertain we will not count you for quorum. If you want to participate but prefer to keep your particiaption hidden from the wiki, mail me back with your information. The Summit schedule will be posted soon. Regards, Fabio (1) the summit will take place only if quorum is achieved. (2) http://en.wikipedia.org/wiki/Prague (3) the summit will last 3 days. People can participate from Monday morning to Wednesday afteroon. From lhh at redhat.com Tue Jul 22 15:02:50 2008 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 22 Jul 2008 11:02:50 -0400 Subject: [Linux-cluster] Taking VM snapshots when services are stopped (patch) In-Reply-To: References: Message-ID: <1216738970.3302.45.camel@ayanami> On Mon, 2008-06-30 at 12:05 +0200, Federico Simoncelli wrote: > Hi all, I added the support for taking snapshots of virtual machines > when the services are stopped. > This avoids the immediate shutdown of the vm and the consequent > problems during the next boot. > > Usage example: > > > > I'm sure there's a lot we can take from the xendomains init script to > improve this feature. > My current patch in attachment. Suggestions are welcome. Pushed to master. Thank you for the patch! 
http://sources.redhat.com/git/?p=cluster.git;a=commit;h=a4abbf1dbfd3287ae70f3177d7d878919d80373b -- Lon From tiagocruz at forumgdh.net Tue Jul 22 16:22:36 2008 From: tiagocruz at forumgdh.net (Tiago Cruz) Date: Tue, 22 Jul 2008 13:22:36 -0300 Subject: [Linux-cluster] Problem adding a GNBD fence device Message-ID: <1216743756.30369.51.camel@tuxkiller.ig.com.br> Hello! I'm using conga (Luci -> Cluster -> Nodes -> Properties) and I'm trying to add one "Main Fencing Method" Fence Type: GDBD Name: data-partition Servers (whitespace separated list): hotsite-1.company.com IP Address: 10.65.9.30 When I click on "Update main fence device" I got this error: The following errors were found: An unknown fence device type was given: "gnbd." Shared Fence Devices: hotsites Shared Fence Devices for Cluster: hotsites Agent type: Global Network Block Device Name: data-partition Nodes using this device for fencing: * No nodes currently employ this fence device I need to add this fence-device one-by-one on my cluster or it's OK using only a "shared" device? That's the conf generated: Thanks -- Tiago Cruz http://everlinux.com Linux User #282636 From lhh at redhat.com Tue Jul 22 20:10:18 2008 From: lhh at redhat.com (Lon Hohberger) Date: Tue, 22 Jul 2008 16:10:18 -0400 Subject: [Linux-cluster] Node with failed service does not get fenced. In-Reply-To: <200807212335.39117.jonas@linpro.no> References: <200807212335.39117.jonas@linpro.no> Message-ID: <1216757418.30587.10.camel@ayanami> On Mon, 2008-07-21 at 23:35 +0200, Jonas Helgi Palsson wrote: > Hi > > Running CentOS 5.2, all current updates on x86_64 platform. > > I have set up a 2node cluster with following resources in one service > > * one shared MD device (the resource is a script that assembles and stops > the , device and checks its status). > * one shared filesystem, > * one shared NFS startup script, > * one shared ip. > > Which are started in that order. > > And the cluster works normaly, I can move the service between the two nodes. > > But I have observed one behavior that is not good. Once when trying to move > the service from one node to another, the clustermanager could not "umount" > the filesystem. > Although "lsof | grep " did not show anything, "umount -f > " did not work. ("umount -l " did the job) > Are there any magical things one can put in cluster.conf to get the behavior I > want? That if a service does not want to stop cleanly, fence the node and > start the service on another node? Add self_fence="1" to the resource. -- Lon From sunhux at gmail.com Wed Jul 23 01:37:43 2008 From: sunhux at gmail.com (sunhux G) Date: Wed, 23 Jul 2008 09:37:43 +0800 Subject: [Linux-cluster] 30 secs to login to two-node RHEL 5.2 cluster Message-ID: <60f08e700807221837u71979a20qe771a314a940d5ca@mail.gmail.com> Hi, We have a 2 node RHEL V5.2 cluster. On both nodes, it takes about 30 seconds from the time after I key in the password & hit ENTER to get to the command prompt. Yes, it consistently took that amount of time for "both" nodes. Also, on both nodes, after login, the command line response is normal/good. "top" did not show anything that chew up a lot of cpu/memory. I login using root on both nodes (on Bash shell) & no hardening has been done yet. When I manually run ". /etc/profile", .bashrc & ". /etc/bashrc" scripts, it completes in 1 sec >From /var/log/messages, don't see any clue other than lots of the following repeated messages : Jul 20 04:02:02 hostname syslogd 1.4.1: restart. 
Jul 20 04:02:43 hostname MR_MONITOR[3816]: Controller ID: 0 Patrol R ead complete Jul 20 04:30:04 hostname MR_MONITOR[3816]: Controller ID: 0 Time est ablished since power on: Time 2008-07-20,04:3 0:04 1440785Seconds Jul 20 05:00:04 hostname MR_MONITOR[3816]: Controller ID: 0 Time est ablished since power on: Time 2008-07-20,05:0 0:04 1442585Seconds Jul 20 05:30:04 hostname MR_MONITOR[3816]: Controller ID: 0 Time est ablished since power on: Time 2008-07-20,05:3 0:04 1444385Seconds Jul 20 06:00:04 hostname MR_MONITOR[3816]: Controller ID: 0 Time est ablished since power on: Time 2008-07-20,06:0 0:04 1446185Seconds Jul 20 06:30:04 hostname MR_MONITOR[3816]: Controller ID: 0 Time est ablished since power on: Time 2008-07-20,06:3 0:04 1447985Seconds What else should I look out for? Thanks U -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlopmart at gmail.com Wed Jul 23 21:00:57 2008 From: carlopmart at gmail.com (carlopmart) Date: Wed, 23 Jul 2008 23:00:57 +0200 Subject: [Linux-cluster] Using GFS2 to store vmware disk files Message-ID: <48879C09.6040509@gmail.com> Hi all, I have installed one host with rhel5.2 and vmware server 2.0 rc1. I stored vmdk files on a GFS2 partition with these params on /etc/fstab: rw,_netdev,noatime,noexec,nodev,nosuid. But performance is really really poor compared with ocfs2 for example or ext3 ... Somebody knows how can I increase GFS2 performance filesystem to store a lot of 2 GB vmdk files?? Many thanks. -- CL Martinez carlopmart {at} gmail {d0t} com From tiagocruz at forumgdh.net Wed Jul 23 21:56:40 2008 From: tiagocruz at forumgdh.net (Tiago Cruz) Date: Wed, 23 Jul 2008 18:56:40 -0300 Subject: [Linux-cluster] fence_gnbd failed Message-ID: <1216850200.30369.109.camel@tuxkiller.ig.com.br> Hello, I have one machine (hotsite-bsb-la-1) exporting GNBD to two machines (hotsite-bsb-la-2 and "-3") The cluster with RHEL 5.2 x86_64 and GFS was working very well, util I reboot the hotsite-bsb-la-2: Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] CLM CONFIGURATION CHANGE Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] New Configuration: Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] r(0) ip(10.65.13.30) Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] r(0) ip(10.65.13.33) Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] Members Left: Jul 23 18:56:38 hotsite-bsb-la-1 kernel: dlm: closing connection to node 2 Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] r(0) ip(10.65.13.31) Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] Members Joined: Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] CLM CONFIGURATION CHANGE Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] New Configuration: Jul 23 18:56:38 hotsite-bsb-la-1 fenced[3099]: hotsite-bsb-la-2.com not a cluster member after 0 sec post_fail_delay Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] r(0) ip(10.65.13.30) Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] r(0) ip(10.65.13.33) Jul 23 18:56:38 hotsite-bsb-la-1 fenced[3099]: fencing node "hotsite-bsb-la-2.com" Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] Members Left: Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] Members Joined: Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [SYNC ] This node is within the primary component and will provide service. Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [TOTEM] entering OPERATIONAL state. 
Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] got nodejoin message 10.65.13.30 Jul 23 18:56:38 hotsite-bsb-la-1 fenced[3099]: fence "hotsite-bsb-la-2.com" failed Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CLM ] got nodejoin message 10.65.13.33 Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CPG ] got joinlist message from node 1 Jul 23 18:56:38 hotsite-bsb-la-1 openais[3082]: [CPG ] got joinlist message from node 3 Jul 23 18:56:43 hotsite-bsb-la-1 fenced[3099]: fencing node "hotsite-bsb-la-2.com.br" Jul 23 18:56:43 hotsite-bsb-la-1 fenced[3099]: fence "hotsite-bsb-la-2.com.br" failed Jul 23 19:00:57 hotsite-bsb-la-1 last message repeated 50 times Why fence was failing? Follow the cluster.conf: # cman_tool status Version: 6.1.0 Config Version: 18 Cluster Name: hotsites Cluster Id: 27589 Cluster Member: Yes Cluster Generation: 184 Membership state: Cluster-Member Nodes: 2 Expected votes: 3 Total votes: 2 Quorum: 2 Active subsystems: 8 Flags: Dirty Ports Bound: 0 177 Node name: hotsite-bsb-la-1.com Node ID: 1 Multicast addresses: 239.192.107.49 Node addresses: 10.65.13.30 Thanks -- Tiago Cruz http://everlinux.com Linux User #282636 From garromo at us.ibm.com Wed Jul 23 22:05:15 2008 From: garromo at us.ibm.com (Gary Romo) Date: Wed, 23 Jul 2008 16:05:15 -0600 Subject: [Linux-cluster] RHEL Upgrade running RHCS/GFS Message-ID: Hello group, I will be upgrading from RHEL4 to RHEL5, on a two-node cluster. I am using blade_center fencing, which is my biggest concern. I want to put all apps/dbs on node-2, then upgrade node-1. After node-1 is good, I want to attach it to the cluster, and fail-over all apps/dbs from node-2 to node-1. Then I will upgrade node-2, attach it to the cluster and be done. My biggest concern is with the fencing mechanism, i have seen it go buggy. What could I do to control fencing, while I upgrade the other servers? I need the apps/dbs up and running while I upgrade. Any suggestions or docs out there?, -Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From jos at xos.nl Wed Jul 23 22:19:46 2008 From: jos at xos.nl (Jos Vos) Date: Thu, 24 Jul 2008 00:19:46 +0200 Subject: [Linux-cluster] RHEL Upgrade running RHCS/GFS In-Reply-To: References: Message-ID: <20080723221946.GA22188@jasmine.xos.nl> On Wed, Jul 23, 2008 at 04:05:15PM -0600, Gary Romo wrote: > I will be upgrading from RHEL4 to RHEL5, on a two-node cluster. > I am using blade_center fencing, which is my biggest concern. > > I want to put all apps/dbs on node-2, then upgrade node-1. > After node-1 is good, I want to attach it to the cluster, and fail-over all > apps/dbs from node-2 to node-1. > Then I will upgrade node-2, attach it to the cluster and be done. AFAIK, RHEL4 and RHEL5 cluster nodes can not work together, cluster-wise, so it is not possible to migrate via this scenario. 
-- -- Jos Vos -- X/OS Experts in Open Systems BV | Phone: +31 20 6938364 -- Amsterdam, The Netherlands | Fax: +31 20 6948204 From Alain.Moulle at bull.net Thu Jul 24 06:10:49 2008 From: Alain.Moulle at bull.net (Alain Moulle) Date: Thu, 24 Jul 2008 08:10:49 +0200 Subject: [Linux-cluster] CS5 / IP addr instead of node name in cluster.conf seems not to work Message-ID: <48881CE9.30702@bull.net> Hi In the RHEL5 cluster.conf doc : http://sources.redhat.com/cluster/doc/cluster_schema_rhel5.html it is written : Tag: Per Node configuration Parent Tag: Attributes: * name(Required): The hostname or IP Address of the node likewie with CS4, but I tried to set IP address instead of hostname Hello cluster experts, I'm new here and new to cluster world too... I need some help, in order to setup a cluster in our organization. Shortly, our schema is: 2 routers for HA and load balancing - ar (active router) - br (backup router) 3 http servers located internaly acting as real web servers (rs1, rs2, rs3) behind ar and br routers. rs1=192.168.113.3/24 rs2=192.168.113.4/24 rs3=192.168.113.5/24 2 shared data servers (shd1, shd2) shd1=192.168.113.6/24 shd1=192.168.113.7/24 1 server for cluster management (rhclm) rhclm=192.168.113.8/24 I've configured ar and br routers for high availability and load banacing and everything is ok. Active router (ar) are forwarding http requests to VIP (floating) external ip address to internaly ip addresses of rs1, rs2, rs3 webservers. Now, i don't know how to: - configure and group some hard disks on our shd1 and sdh2 servers to form a shared volume for our rs1, rs2, rs3 real servers (i suppose that the correct topic should be shared volume using GFS...) - make usable this volume and act as DOCUMENT ROOT on our rs1, rs2 and rs3 webservers. All our servers are running centos 5.2 and has all updates installed. On rhclm (192.168.113.8) i installed cana and created a cluster with 2 nodes: shd1 and shd2. Cana, generated the following cluster.conf on shd1 and shd2 servers: [root at shd1 ~]# cat /etc/cluster/cluster.conf Now, on shd1 i am using hda for centos OS and hdb (1,2) i want to make it available to be used on shared volume: [root at shd1 ~]# cat /proc/partitions major minor #blocks name 3 64 39082680 hdb 3 65 19541056 hdb1 3 66 19541592 hdb2 [root at shd1 ~]# on shd2 i have hda for centos and hdc (1,2) i want it available to be used on shared volume: [root at shd2 ~]# cat /proc/partitions major minor #blocks name 22 0 78150744 hdc 22 1 39075088 hdc1 22 2 39075624 hdc2 [root at shd2 ~]# Using cana, i couldn't find a way to create a volume, grouping hdb1 (from shd1) together with hdc1 (from sdh2) in one volume. I want to do this for 2 reasons: - i want that volume to be mounted as document root on rs1, rs2, rs3 real webservers - i want that volume to be easy to extend adding new hdd on the fly of other computers to this volume (new hdd slices of other new computers). Can anybody tell me how can i do it? I don't know that for this design if correct to have: - all 5 servers (rs1, rs2, rs3, shd1, shd2) to be configured as nodes in the same cluster or - rs1, rs2, rs3 to be part of one cluster and shd1 and shd2 to form another cluster I read section: A.2. Configuring Shared Storage in this document http://www.centos.org/docs/5/html/Cluster_Administration/ap-httpd-service-CA.html but is not what i want. Can anybody help me. A link pointing me to the correct direction or a howto will be appreciated. 
Regards, Alx From theophanis_kontogiannis at yahoo.gr Thu Jul 24 12:44:25 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Thu, 24 Jul 2008 15:44:25 +0300 Subject: [Linux-cluster] Journal 0 locked on GFS2? gfs2_fsck gives no results! Message-ID: <00ac01c8ed8b$02b6f180$0824d480$@gr> Hello all I have a two node cluster on 5.2 with 2.6.18-92.1.6.el5.centos.plus running for some time with drbd 8.2 I also have gfs2-utils-0.1.44-1.el5_2.1 Suddenly when trying to mount, I started getting for my gfs2 (running on LV, over VG, over PV over DRBD): GFS2: fsid=: Trying to join cluster "lock_dlm", "tweety:gfs2-00" GFS2: fsid=tweety:gfs2-00.0: Joined cluster. Now mounting FS... GFS2: fsid=tweety:gfs2-00.0: jid=0, already locked for use GFS2: fsid=tweety:gfs2-00.0: jid=0: Looking at journal... GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error GFS2: fsid=tweety:gfs2-00.0: inode = 4 25 GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = fs/gfs2/recovery.c, line = 239 GFS2: fsid=tweety:gfs2-00.0: about to withdraw this file system GFS2: fsid=tweety:gfs2-00.0: telling LM to withdraw dlm: closing connection to node 2 Trying to mount again the fs I get the same error. Trying to gfs2_fsck -vy /dev/mapper/vg0-data0 gives: Initializing fsck Initializing lists... Recovering journals (this may take a while)jid=0: Looking at journal... jid=0: Failed jid=1: Looking at journal... jid=1: Journal is clean. jid=2: Looking at journal... jid=2: Journal is clean. jid=3: Looking at journal... jid=3: Journal is clean. jid=4: Looking at journal... jid=4: Journal is clean. jid=5: Looking at journal... jid=5: Journal is clean. jid=6: Looking at journal... jid=6: Journal is clean. jid=7: Looking at journal... jid=7: Journal is clean. jid=8: Looking at journal... jid=8: Journal is clean. jid=9: Looking at journal... jid=9: Journal is clean. Journal recovery complete. Initializing special inodes... Validating Resource Group index. Level 1 RG check. (level 1 passed) 1392 resource groups found. Setting block ranges... Starting pass1 Checking metadata in Resource Group #0 Checking metadata in Resource Group #1 Checking metadata in Resource Group #2 .................... Checking metadata in Resource Group #1391 Pass1 complete Checking system inode 'master' System inode for 'master' is located at block 23 (0x17) Checking system inode 'root' System inode for 'root' is located at block 22 (0x16) Checking system inode 'inum' System inode for 'inum' is located at block 330990 (0x50cee) Checking system inode 'statfs' System inode for 'statfs' is located at block 330991 (0x50cef) Checking system inode 'jindex' System inode for 'jindex' is located at block 24 (0x18) Checking system inode 'rindex' System inode for 'rindex' is located at block 330992 (0x50cf0) Checking system inode 'quota' System inode for 'quota' is located at block 331026 (0x50d12) Checking system inode 'per_node' System inode for 'per_node' is located at block 328392 (0x502c8) Starting pass1b Looking for duplicate blocks... No duplicate blocks found Pass1b complete Starting pass1c Looking for inodes containing ea blocks... Pass1c complete Starting pass2 Checking system directory inode 'jindex' Checking system directory inode 'per_node' Checking system directory inode 'master' Checking system directory inode 'root' Checking directory inodes. Pass2 complete Starting pass3 Marking root inode connected Marking master directory inode connected Checking directory linkage. 
Pass3 complete Starting pass4 Checking inode reference counts. Pass4 complete Starting pass5 Verifying Resource Group #0 Verifying Resource Group #1 Verifying Resource Group #2 Verifying Resource Group #3 Verifying Resource Group #4 ............... Verifying Resource Group #1388 Verifying Resource Group #1389 Verifying Resource Group #1390 Verifying Resource Group #1391 Pass5 complete Writing changes to disk Syncing the device. Freeing buffers. gfs2_fsck complete My questions are: What it really means for GFS2 to have journal 0 locked. How to get out of this situation and make the fs mountable again? Should I try to write some garbage on journal 0 with gfs2_edit so to force gfs2_fsck to recover it? Thank you all for your time. Theophanis Kontogiannis -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordan at bobich.net Thu Jul 24 12:59:43 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Thu, 24 Jul 2008 13:59:43 +0100 (BST) Subject: [Linux-cluster] help on configuring a shared gfs volume in a load balanced http cluster In-Reply-To: <200807241531.48032.linux@vfemail.net> References: <200807241531.48032.linux@vfemail.net> Message-ID: So, shd machines are actually SANs. You will need to use something like DRBD if you want shd machines mirrored and ATAoE or iSCSI to export the volumes for the rs machines to mount. Then create a shared GFS on the ATAoE/iSCSI device. You may, however, find that for web servers (lots of small files, frequent access to same files from all nodes) NFS/NAS gives you better performance, with shds configured mirrored for fail-over by not load balanced (warm standby). If you need very high performance / low latencies from storage, you may want to look into something like seznamfs for replicating content from a single master server to multiple slaves (DAS). Gordan On Thu, 24 Jul 2008, Alex wrote: > Hello cluster experts, > > I'm new here and new to cluster world too... I need some help, in order to > setup a cluster in our organization. > > Shortly, our schema is: > > 2 routers for HA and load balancing > - ar (active router) > - br (backup router) > > 3 http servers located internaly acting as real web servers (rs1, rs2, rs3) > behind > ar and br routers. > rs1=192.168.113.3/24 > rs2=192.168.113.4/24 > rs3=192.168.113.5/24 > > 2 shared data servers (shd1, shd2) > shd1=192.168.113.6/24 > shd1=192.168.113.7/24 > > 1 server for cluster management (rhclm) > rhclm=192.168.113.8/24 > > I've configured ar and br routers for high availability and load banacing and > everything is ok. Active router (ar) are forwarding http requests to VIP > (floating) external ip address to internaly ip addresses of rs1, rs2, rs3 > webservers. > > Now, i don't know how to: > - configure and group some hard disks on our shd1 and sdh2 servers to > form a shared volume for our rs1, rs2, rs3 real servers (i suppose that the > correct topic should be shared volume using GFS...) > - make usable this volume and act as DOCUMENT ROOT on our rs1, rs2 and rs3 > webservers. > > All our servers are running centos 5.2 and has all updates installed. > > On rhclm (192.168.113.8) i installed cana and created a cluster with 2 nodes: > shd1 and shd2. 
> > Cana, generated the following cluster.conf on shd1 and shd2 servers: > > [root at shd1 ~]# cat /etc/cluster/cluster.conf > > > post_join_delay="3"/> > > > > > > > > > > > > > > > token_retransmits_before_loss_const="20"/> > > > Now, on shd1 i am using hda for centos OS and hdb (1,2) i want to make it > available to be used on shared volume: > > [root at shd1 ~]# cat /proc/partitions > major minor #blocks name > 3 64 39082680 hdb > 3 65 19541056 hdb1 > 3 66 19541592 hdb2 > [root at shd1 ~]# > > on shd2 i have hda for centos and hdc (1,2) i want it available to be used on > shared volume: > [root at shd2 ~]# cat /proc/partitions > major minor #blocks name > 22 0 78150744 hdc > 22 1 39075088 hdc1 > 22 2 39075624 hdc2 > [root at shd2 ~]# > > Using cana, i couldn't find a way to create a volume, grouping hdb1 (from > shd1) together with hdc1 (from sdh2) in one volume. I want to do this for 2 > reasons: > - i want that volume to be mounted as document root on rs1, rs2, rs3 real > webservers > - i want that volume to be easy to extend adding new hdd on the fly of other > computers to this volume (new hdd slices of other new computers). > > Can anybody tell me how can i do it? > > I don't know that for this design if correct to have: > - all 5 servers (rs1, rs2, rs3, shd1, shd2) to be configured as nodes in > the same cluster > or > - rs1, rs2, rs3 to be part of one cluster and shd1 and shd2 to form another > cluster > > I read section: A.2. Configuring Shared Storage in this document > http://www.centos.org/docs/5/html/Cluster_Administration/ap-httpd-service-CA.html > but is not what i want. > > Can anybody help me. A link pointing me to the correct direction or a howto > will be appreciated. > > Regards, > Alx > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > From rpeterso at redhat.com Thu Jul 24 13:50:55 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Thu, 24 Jul 2008 08:50:55 -0500 Subject: [Linux-cluster] Journal 0 locked on GFS2? gfs2_fsck gives no results! In-Reply-To: <00ac01c8ed8b$02b6f180$0824d480$@gr> References: <00ac01c8ed8b$02b6f180$0824d480$@gr> Message-ID: <1216907455.4003.37.camel@technetium.msp.redhat.com> On Thu, 2008-07-24 at 15:44 +0300, Theophanis Kontogiannis wrote: > GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error > > GFS2: fsid=tweety:gfs2-00.0: inode = 4 25 > > GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = > fs/gfs2/recovery.c, line = 239 Hi Theophanis, I haven't seen this error before. It indicates a bad entry in the first journal. The gfs2_fsck program rejected it for the same reason that the GFS2 file system rejected it. I've been doing a lot of work on gfs2_fsck this week, so it would be an interesting for me to get a copy of your file system metadata (not any of the data) and run it through my latest fsck on one of my test systems. I'd also kind of like to examine the journal to see what's wrong with it and possibly give gfs2_fsck the ability to repair the damage. I can't make any promises though. If you're interested in doing this, run this command: gfs2_edit savemeta /dev/vg0/data0 /tmp/theophanis.metadata bzip2 /tmp/theophanis.metadata Then put the resulting .bz2 file on a server where I can get it. You can try this command on the pre-existing gfs2_edit program, but it might not save all of the metadata I need. I don't know how "up to date" Centos is in regards to gfs2_edit. 
You can also download the latest cluster git tree from source code, compile it, and run the latest version to make sure I get everything. If you're not willing to send me your metadata, you could run this command and email the output: gfs2_edit -p journal0 /dev/vg0/data0 > /tmp/journal0.txt Then I could at least try to determine what's wrong with the bad journal. Regards, Bob Peterson Red Hat Clustering & GFS From lajko.attila at ulx.hu Thu Jul 24 13:59:29 2008 From: lajko.attila at ulx.hu (=?ISO-8859-1?Q?Attila_Lajk=F3?=) Date: Thu, 24 Jul 2008 15:59:29 +0200 Subject: [Linux-cluster] cluster logging Message-ID: <5F36979F-7385-4BC5-A8B5-D3CCE501C8C1@ulx.hu> Hi, I'm using RHCS on RHEL4. How can I configure the cluster.conf to see all the debug messages of cman and ccsd? Regards, Attila Lajko From p_pavlos at freemail.gr Thu Jul 24 14:11:06 2008 From: p_pavlos at freemail.gr (Pavlos Parissis) Date: Thu, 24 Jul 2008 17:11:06 +0300 Subject: [Linux-cluster] cluster logging References: <5F36979F-7385-4BC5-A8B5-D3CCE501C8C1@ulx.hu> Message-ID: <48888d7a597997.69559843@freemail.gr> > Hi, > > I'm using RHCS on RHEL4. How can I configure the cluster.conf to see > all the debug messages of cman and ccsd? http://sourceware.org/cluster/faq.html#rgm_logging From jerlyon at gmail.com Thu Jul 24 16:05:37 2008 From: jerlyon at gmail.com (Jeremy Lyon) Date: Thu, 24 Jul 2008 10:05:37 -0600 Subject: [Linux-cluster] rdisc Message-ID: <779919740807240905s1b1a033ar5945d9e908792ffc@mail.gmail.com> Hi, I noticed the following messages when starting services. Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: ----224.0.0.2 rdisc Statistics---- Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: 3 packets transmitted, Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: 0 packets received, Jul 24 10:05:09 lxomt04e in.rdiscd[3763]: Jul 24 10:28:45 lxomt04e in.rdiscd[17755]: setsockopt (IP_ADD_MEMBERSHIP): Address already in use Jul 24 10:28:45 lxomt04e in.rdiscd[17755]: Failed joining addresses Jul 24 10:57:21 lxomt05e in.rdiscd[2327]: setsockopt (IP_ADD_MEMBERSHIP): Address already in use Jul 24 10:57:21 lxomt05e in.rdiscd[2327]: Failed joining addresses Are these anything to be concerned with? I noticed that /usr/share/cluster/ip.sh will either HUP or start rdisc. On one node I see the rdisc -fs process running and on the other node I don't, but both have the errors. [root at lxomt04e ~]# ps -ef | grep rdis root 18026 1 0 10:29 ? 00:00:00 rdisc -fs root 27270 26587 0 11:02 pts/1 00:00:00 grep rdis [root at lxomt04e ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.2 (Tikanga) [root at lxomt04e ~]# uname -a Linux lxomt04e 2.6.18-92.1.6.el5 #1 SMP Fri Jun 20 02:36:06 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux [root at lxomt04e ~]# rpm -q cman rgmanager cman-2.0.84-2.el5 rgmanager-2.0.38-2.el5_2.1 Thanks Jeremy -------------- next part -------------- An HTML attachment was scrubbed... URL: From tiagocruz at forumgdh.net Thu Jul 24 17:20:10 2008 From: tiagocruz at forumgdh.net (Tiago Cruz) Date: Thu, 24 Jul 2008 14:20:10 -0300 Subject: [Linux-cluster] fence and gnbd Message-ID: <1216920010.6870.14.camel@tuxkiller.ig.com.br> Anyone here uses gnbd as fence device and can please share your cluster.conf with me? And, if possible, tell me how do you test the environment? Many thanks! 
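As far as I could piece it together, the relevant cluster.conf bits would look something like the sketch below (every name and address is made up, and the servers/ipaddr attributes just mirror what the Conga form asked me for, so please correct me if they are wrong):

  <fencedevices>
    <fencedevice agent="fence_gnbd" name="gnbd-fence" servers="gnbdserver1 gnbdserver2"/>
  </fencedevices>

  <clusternode name="node1" nodeid="1" votes="1">
    <fence>
      <method name="1">
        <device name="gnbd-fence" ipaddr="10.0.0.1"/>
      </method>
    </fence>
  </clusternode>

For testing, I suppose running fence_node node1 from another member and watching /var/log/messages would show whether the agent actually works.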
- Tiago Cruz From beres.laszlo at sys-admin.hu Thu Jul 24 19:04:00 2008 From: beres.laszlo at sys-admin.hu (Laszlo BERES) Date: Thu, 24 Jul 2008 21:04:00 +0200 Subject: [Linux-cluster] cluster logging In-Reply-To: <48888d7a597997.69559843@freemail.gr> References: <5F36979F-7385-4BC5-A8B5-D3CCE501C8C1@ulx.hu> <48888d7a597997.69559843@freemail.gr> Message-ID: <4888D220.2070702@sys-admin.hu> Pavlos Parissis wrote: >> I'm using RHCS on RHEL4. How can I configure the cluster.conf to see >> all the debug messages of cman and ccsd? > > http://sourceware.org/cluster/faq.html#rgm_logging Attila asked about cman and ccs logging, but the FAQ above is about rgmanager's log messages. Isn't there an easy way to get cman debug messages in CS 4? -- Laszlo BERES RHCE, RHCX senior IT engineer, trainer From theophanis_kontogiannis at yahoo.gr Thu Jul 24 20:17:41 2008 From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis) Date: Thu, 24 Jul 2008 23:17:41 +0300 Subject: [Linux-cluster] Journal 0 locked on GFS2? gfs2_fsck gives no results! In-Reply-To: <1216907455.4003.37.camel@technetium.msp.redhat.com> References: <00ac01c8ed8b$02b6f180$0824d480$@gr> <1216907455.4003.37.camel@technetium.msp.redhat.com> Message-ID: <00d701c8edca$55a4a790$00edf6b0$@gr> Hi Bob, Thank you very much for your interest. No problem at all to send you the requested information. Anything to help the open source community.... (And as side effect, maybe get my data back :) ) I am sending you an off-list e-mail with download details. Thank you. Theophanis Kontogiannis -----Original Message----- From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Bob Peterson Sent: Thursday, July 24, 2008 4:51 PM To: linux clustering Subject: Re: [Linux-cluster] Journal 0 locked on GFS2? gfs2_fsck gives no results! On Thu, 2008-07-24 at 15:44 +0300, Theophanis Kontogiannis wrote: > GFS2: fsid=tweety:gfs2-00.0: fatal: filesystem consistency error > > GFS2: fsid=tweety:gfs2-00.0: inode = 4 25 > > GFS2: fsid=tweety:gfs2-00.0: function = jhead_scan, file = > fs/gfs2/recovery.c, line = 239 Hi Theophanis, I haven't seen this error before. It indicates a bad entry in the first journal. The gfs2_fsck program rejected it for the same reason that the GFS2 file system rejected it. I've been doing a lot of work on gfs2_fsck this week, so it would be an interesting for me to get a copy of your file system metadata (not any of the data) and run it through my latest fsck on one of my test systems. I'd also kind of like to examine the journal to see what's wrong with it and possibly give gfs2_fsck the ability to repair the damage. I can't make any promises though. If you're interested in doing this, run this command: gfs2_edit savemeta /dev/vg0/data0 /tmp/theophanis.metadata bzip2 /tmp/theophanis.metadata Then put the resulting .bz2 file on a server where I can get it. You can try this command on the pre-existing gfs2_edit program, but it might not save all of the metadata I need. I don't know how "up to date" Centos is in regards to gfs2_edit. You can also download the latest cluster git tree from source code, compile it, and run the latest version to make sure I get everything. If you're not willing to send me your metadata, you could run this command and email the output: gfs2_edit -p journal0 /dev/vg0/data0 > /tmp/journal0.txt Then I could at least try to determine what's wrong with the bad journal. 
Regards, Bob Peterson Red Hat Clustering & GFS -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster From carlopmart at gmail.com Fri Jul 25 07:04:30 2008 From: carlopmart at gmail.com (carlopmart) Date: Fri, 25 Jul 2008 09:04:30 +0200 Subject: [Linux-cluster] Re: Using GFS2 to store vmware disk files In-Reply-To: <48879C09.6040509@gmail.com> References: <48879C09.6040509@gmail.com> Message-ID: <48897AFE.4070009@gmail.com> carlopmart wrote: > Hi all, > > I have installed one host with rhel5.2 and vmware server 2.0 rc1. I > stored vmdk files on a GFS2 partition with these params on /etc/fstab: > rw,_netdev,noatime,noexec,nodev,nosuid. But performance is really really > poor compared with ocfs2 for example or ext3 ... > > Somebody knows how can I increase GFS2 performance filesystem to store > a lot of 2 GB vmdk files?? > > Many thanks. > Please, any hints?? -- CL Martinez carlopmart {at} gmail {d0t} com From p_pavlos at freemail.gr Fri Jul 25 07:45:46 2008 From: p_pavlos at freemail.gr (Pavlos Parissis) Date: Fri, 25 Jul 2008 10:45:46 +0300 Subject: [Linux-cluster] cluster logging References: <4888D220.2070702@sys-admin.hu> Message-ID: <488984aa222871.26809001@freemail.gr> > Pavlos Parissis wrote: > > >> I'm using RHCS on RHEL4. How can I configure the cluster.conf to see > >> all the debug messages of cman and ccsd? > > > > http://sourceware.org/cluster/faq.html#rgm_logging > > Attila asked about cman and ccs logging, but the FAQ above is about > rgmanager's log messages. > > Isn't there an easy way to get cman debug messages in CS 4? In my cluster conf I can change it in 2 places, in rm sections and here I don't know if you can change it for cman and ccs. Have you found anything on the man pages? Cheers, Pavlos From linux at vfemail.net Fri Jul 25 09:13:47 2008 From: linux at vfemail.net (Alex) Date: Fri, 25 Jul 2008 12:13:47 +0300 Subject: [Linux-cluster] help on configuring a shared gfs volume in a =?iso-8859-1?q?load=09balanced_http?= cluster In-Reply-To: References: <200807241531.48032.linux@vfemail.net> Message-ID: <200807251213.48564.linux@vfemail.net> On Thursday 24 July 2008 15:59, gordan at bobich.net wrote: > So, shd machines are actually SANs. You will need to use something like > DRBD if you want shd machines mirrored Hello Gordan, I am confused because i didn't do this job in a past and have no experience with this service. I would like to parse this task using small steps, in order to be able to understand what to do..., so my questions comes below: Actually, i want just to have hdb1 from shd1 and hdc1 from shd2 joided in one volume. No mirror for this volume at that momment. Is possible? If yes, how? Using ATAoE? After that, i would like to know, how to install GFS on this volume and use it as documennt root on our real web servers (rs1, rs2, rs3). Is possible? If yes, how? I don't understand from your explanation, how to group machines: shd1 and shd2 should be in one cluster and rs1, rs2 and rs3 in other cluster or: shd1 and shd2 shoud be regular servers which is just exporting their HDD using ATAoE and rs1, rs2 and rs2 to be grouped in one cluster which are importing a GFS volume from somwhere? If yes, from where? How can i configure a GFS volume on ATAoE disks and from where will be accesible? I need another one machine which will act as agregator for ATAoE disks or our real web servers (rs1, rs2, rs3) will responsible to import directly these disks? 
> and ATAoE or iSCSI to export the > volumes for the rs machines to mount. In our lab we are using regular hard disks, so iSCSI is excluded. I read an article here (http://www.linuxjournal.com/article/8149) about ATAoE and i have some questions: - on our centos 5.2 boxes, we already have aoe kernel module but we don't have aoe-stat command. Is any packet shoud i install via yum to have this command (or other command to hadle aoe disks) or is required do download aoetools-26.tar.gz and compile from source (http://sourceforge.net/projects/aoetools/) - in above article they are talking about RAID10, LVM and JFS. They are not teaching me about GFS and clustering. They choose JFS and not GFS saying that "JFS is a filesystem that can grow dynamically to large sizes, so he is going to put a JFS filesystem on a logical volume". I want that but using GFS, is possible or not? They are saying that: "using a cluster filesystem such as GFS, it is possible for multiple hosts on the Ethernet network to access the same block storage using ATA over Ethernet. There's no need for anything like an NFS server" "But there's a snag. Any time you're using a lot of disks, you're increasing the chances that one of the disks will fail. Usually you use RAID to take care of this issue by introducing some redundancy. Unfortunately, Linux software RAID is not cluster-aware. That means each host on the network cannot do RAID 10 using mdadm and have things simply work out." So, finally, what should i do? Can you or anybody suggest me some howtos and what is the correct order to group machines and implement clustering? Regards, Alx > > Then create a shared GFS on the ATAoE/iSCSI device. > You may, however, find that for web servers (lots of small files, frequent > access to same files from all nodes) NFS/NAS gives you better performance, > with shds configured mirrored for fail-over by not load balanced (warm > standby). > > If you need very high performance / low latencies from storage, you may > want to look into something like seznamfs for replicating content from a > single master server to multiple slaves (DAS). > > Gordan > > On Thu, 24 Jul 2008, Alex wrote: > > Hello cluster experts, > > > > I'm new here and new to cluster world too... I need some help, in order > > to setup a cluster in our organization. > > > > Shortly, our schema is: > > > > 2 routers for HA and load balancing > > - ar (active router) > > - br (backup router) > > > > 3 http servers located internaly acting as real web servers (rs1, rs2, > > rs3) behind > > ar and br routers. > > rs1=192.168.113.3/24 > > rs2=192.168.113.4/24 > > rs3=192.168.113.5/24 > > > > 2 shared data servers (shd1, shd2) > > shd1=192.168.113.6/24 > > shd1=192.168.113.7/24 > > > > 1 server for cluster management (rhclm) > > rhclm=192.168.113.8/24 > > > > I've configured ar and br routers for high availability and load banacing > > and everything is ok. Active router (ar) are forwarding http requests to > > VIP (floating) external ip address to internaly ip addresses of rs1, rs2, > > rs3 webservers. > > > > Now, i don't know how to: > > - configure and group some hard disks on our shd1 and sdh2 servers to > > form a shared volume for our rs1, rs2, rs3 real servers (i suppose that > > the correct topic should be shared volume using GFS...) > > - make usable this volume and act as DOCUMENT ROOT on our rs1, rs2 and > > rs3 webservers. > > > > All our servers are running centos 5.2 and has all updates installed. 
> > > > On rhclm (192.168.113.8) i installed cana and created a cluster with 2 > > nodes: shd1 and shd2. > > > > Cana, generated the following cluster.conf on shd1 and shd2 servers: > > > > [root at shd1 ~]# cat /etc/cluster/cluster.conf > > > > > > > post_join_delay="3"/> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > token_retransmits_before_loss_const="20"/> > > > > > > Now, on shd1 i am using hda for centos OS and hdb (1,2) i want to make it > > available to be used on shared volume: > > > > [root at shd1 ~]# cat /proc/partitions > > major minor #blocks name > > 3 64 39082680 hdb > > 3 65 19541056 hdb1 > > 3 66 19541592 hdb2 > > [root at shd1 ~]# > > > > on shd2 i have hda for centos and hdc (1,2) i want it available to be > > used on shared volume: > > [root at shd2 ~]# cat /proc/partitions > > major minor #blocks name > > 22 0 78150744 hdc > > 22 1 39075088 hdc1 > > 22 2 39075624 hdc2 > > [root at shd2 ~]# > > > > Using cana, i couldn't find a way to create a volume, grouping hdb1 (from > > shd1) together with hdc1 (from sdh2) in one volume. I want to do this for > > 2 reasons: > > - i want that volume to be mounted as document root on rs1, rs2, rs3 real > > webservers > > - i want that volume to be easy to extend adding new hdd on the fly of > > other computers to this volume (new hdd slices of other new computers). > > > > Can anybody tell me how can i do it? > > > > I don't know that for this design if correct to have: > > - all 5 servers (rs1, rs2, rs3, shd1, shd2) to be configured as nodes in > > the same cluster > > or > > - rs1, rs2, rs3 to be part of one cluster and shd1 and shd2 to form > > another cluster > > > > I read section: A.2. Configuring Shared Storage in this document > > http://www.centos.org/docs/5/html/Cluster_Administration/ap-httpd-service > >-CA.html but is not what i want. > > > > Can anybody help me. A link pointing me to the correct direction or a > > howto will be appreciated. > > > > Regards, > > Alx > > > > -- > > Linux-cluster mailing list > > Linux-cluster at redhat.com > > https://www.redhat.com/mailman/listinfo/linux-cluster > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From tez at terryburton.co.uk Fri Jul 25 10:46:27 2008 From: tez at terryburton.co.uk (Terry Burton) Date: Fri, 25 Jul 2008 11:46:27 +0100 Subject: [Linux-cluster] CLVM mirror implementation Message-ID: <2d1f9a6b0807250346m40ac3be0y55dea697b463cb61@mail.gmail.com> Hi. I find CLVM an appealing approach to providing storage to Xen hosts. We can provide a single large SAN-backed LUN to all of our Xen hosts which we manage using CLVM which allows us to raise a VM on any Xen host and provide live migration between hosts. However, the SAN does not support volume mirroring and is therefore a single point of failure. I would like to move towards a HA solution using some form of cluster-aware RAID. I have been investigating some solutions for this and discovered Jonathan Brassows's report to the Red Hat Cluster Summit (year?) entitled "LVM mirroring" [1] in which he mentions the "cluster mirror implementation", presumably referring to the ability of CLVM to support mirror LVs? Following that line of thought I would like to present distinct LUNs from two (or more) seperate SANs to all Xen Dom0 and configure these as PVs belonging to a CLVM managed VG. I would then configure the LVs that we present as storage to each Xen DomU using "lvcreate -m {1,2} ..." 
so that each VMs storage is provided from at least two SANs, each of which can fail (or undergo maintainance) in isolation without loss of service to the VMs. Firstly, have I understood CLVM mirroring correctly in that it will eventually support this scenario? How close is CLVM mirroring to becoming a practical reality, including the rapid detection of mirror failure (dmeventd) and automatic recover of mirrors? Thanks for your time, Terry [1] http://devresources.linux-foundation.org/dev/clusters/docs/cluster_summit_mirror_paper.pdf From fog at t.is Fri Jul 25 10:59:16 2008 From: fog at t.is (=?iso-8859-1?Q?Finnur_=D6rn_Gu=F0mundsson_-_TM_Software?=) Date: Fri, 25 Jul 2008 10:59:16 -0000 Subject: [Linux-cluster] Node with failed service does not get fenced. References: <200807212335.39117.jonas@linpro.no> Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F7790103F20C@SKYHQAMX08.klasi.is> Hi, There is a flag you can use to force a reboot if unmount is not successful. See: http://kbase.redhat.com/faq/FAQ_51_11753.shtm K?r kve?ja / Best regards, Finnur ?. Gu?mundsson MCP - RHCA - Linux+ System Engineer - System Operations fog at t.is TM Software - Skyggnir Ur?arhvarf 6, IS- 203 K?pavogur, Iceland tel: + 354 545 3000-fax + 354 545 3001 www.t.is -----Original Message----- From: linux-cluster-bounces at redhat.com on behalf of Jonas Helgi Palsson Sent: Mon 7/21/2008 21:35 To: 'linux clustering' Subject: [Linux-cluster] Node with failed service does not get fenced. Hi Running CentOS 5.2, all current updates on x86_64 platform. I have set up a 2node cluster with following resources in one service * one shared MD device (the resource is a script that assembles and stops the , device and checks its status). * one shared filesystem, * one shared NFS startup script, * one shared ip. Which are started in that order. And the cluster works normaly, I can move the service between the two nodes. But I have observed one behavior that is not good. Once when trying to move the service from one node to another, the clustermanager could not "umount" the filesystem. Although "lsof | grep " did not show anything, "umount -f " did not work. ("umount -l " did the job) But when the clustermanager failed on that, it also failes on the MD script and goes into "failed" status, with a message that "manual intervention is needed". Why does the node not get fenced down? Upon "reboot -f" the service does not start until the faulty node is back online. Are there any magical things one can put in cluster.conf to get the behavior I want? That if a service does not want to stop cleanly, fence the node and start the service on another node? regards Jonas -- Jonas Helgi Palsson -- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- An HTML attachment was scrubbed... URL: From gordan at bobich.net Fri Jul 25 11:05:04 2008 From: gordan at bobich.net (gordan at bobich.net) Date: Fri, 25 Jul 2008 12:05:04 +0100 (BST) Subject: [Linux-cluster] help on configuring a shared gfs volume in a load balanced http cluster In-Reply-To: <200807251213.48564.linux@vfemail.net> References: <200807241531.48032.linux@vfemail.net> <200807251213.48564.linux@vfemail.net> Message-ID: On Fri, 25 Jul 2008, Alex wrote: > On Thursday 24 July 2008 15:59, gordan at bobich.net wrote: >> So, shd machines are actually SANs. 
You will need to use something like >> DRBD if you want shd machines mirrored > > Hello Gordan, > > I am confused because i didn't do this job in a past and have no experience > with this service. I would like to parse this task using small steps, in > order to be able to understand what to do..., so my questions comes below: > > Actually, i want just to have hdb1 from shd1 and hdc1 from shd2 joided in one > volume. No mirror for this volume at that momment. Is possible? If yes, how? > Using ATAoE? Set up ATAoE on shd and use it to export a volume. Connect to this ATAoE share from the front end nodes. You can then use something like Cluster LVM (CLVM) to unify them into one volume. Then create GFS on this volume. Note that if you lose either of the two shd machines you will likely lose all the data. > After that, i would like to know, how to install GFS on this volume and use it > as documennt root on our real web servers (rs1, rs2, rs3). Is possible? If > yes, how? Yes, when you have the logical volume consisting of shd1 and shd2, create the GFS on it as per the docs (mkfs.gfs), mount it to where you want it, and point Apache at that path. Nothing magical about it, it's just like any other once you have it mounted. > I don't understand from your explanation, how to group machines: shd1 and shd2 > should be in one cluster and rs1, rs2 and rs3 in other cluster I don't see why you need shd1 and shd2 machines in a cluster. They are just SANs. Unless they are mirroring each other or beign each other's backup there is no immediately obvious reason from your example why they should be clustered together. > or: shd1 and shd2 shoud be regular servers which is just exporting their > HDD Yes. And you don't export the HDD per se using ATAoE or iSCSI - you export a "volume" (which is just a file on shd's file system that is effectively a disk image). > using ATAoE and rs1, rs2 and rs2 to be grouped in one cluster which > are importing a GFS volume from somwhere? rs machines would import the ATAoE volumes, establish a logical volume on top of them, and then start the GFS file system on top of that. > If yes, from where? How can i configure a GFS volume on > ATAoE disks and from where will be accesible? It will be accessible from any machine in the cluster the GFS volume is built for (in this case rs set), once they connect the ATAoE (or iSCSI if that's what you use for it, there isn't THAT much difference between them) shares from shds. > I need another one machine > which will act as agregator for ATAoE disks or our real web servers (rs1, > rs2, rs3) will responsible to import directly these disks? You don't need an agregator, you can unify the volumes using CLVM into one big logical volume, and have GFS live on top of that. >> and ATAoE or iSCSI to export the >> volumes for the rs machines to mount. > > In our lab we are using regular hard disks, so iSCSI is excluded. iSCSI is a network protocol, nothing to do with SCSI disks per se. It's SCSI-over-ethernet. You can export any file on a machine as a volume using iSCSI. Whether the underlying disk is SCSI, ATA or something exotic is entirely irrelevant. ATAoE and iSCSI are both applicable to your case. ATAoE has somewhat lower overheads (read: a little faster) but is ethernet layer based. iSCSI is TCP based so is routable. iSCSI is also a little more mature. 
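To make that concrete, the rough sequence would be something like the following -- the shelf/slot numbers, device names, volume names and the cluster name "mycluster" are only examples, and it assumes aoetools (vblade) on the shd boxes plus a working cman/clvmd on the rs nodes:

# on shd1 (shd2 would do the same with its own shelf number and disk)
vbladed 1 1 eth0 /dev/hdb1

# on one rs node, once the aoe module sees both exported devices
pvcreate /dev/etherd/e1.1 /dev/etherd/e2.1
vgcreate -c y webvg /dev/etherd/e1.1 /dev/etherd/e2.1
lvcreate -l 100%FREE -n docroot webvg
mkfs.gfs -p lock_dlm -t mycluster:docroot -j 3 /dev/webvg/docroot

# on every rs node
mount -t gfs /dev/webvg/docroot /var/www/html

The "-c y" marks the volume group as clustered, and "-j 3" creates one journal per rs node; allow extra journals if you expect to add nodes later.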
> I read an article here (http://www.linuxjournal.com/article/8149) about ATAoE > and i have some questions: > > - on our centos 5.2 boxes, we already have aoe kernel module but we don't have > aoe-stat command. Is any packet shoud i install via yum to have this command > (or other command to hadle aoe disks) or is required do download > aoetools-26.tar.gz and compile from source > (http://sourceforge.net/projects/aoetools/) > > - in above article they are talking about RAID10, LVM and JFS. They are not > teaching me about GFS and clustering. They choose JFS and not GFS saying that > "JFS is a filesystem that can grow dynamically to large sizes, so he is going > to put a JFS filesystem on a logical volume". I want that but using GFS, is > possible or not? There are several concepts and technologies you need to go read up on before getting further with this: ATAoE iSCSI LVM/CLVM for volume management If you add additional volumes (e.g. exported via iSCSI or ATAoE) to your SAN boxes, you can add them into your CLVM volume you have GFS on top of, and the virtual "disk" (logical volume) will show as being bigger. You can then grow the GFS file system on this volume and have it extend onto the additional space. > They are saying that: > > "using a cluster filesystem such as GFS, it is possible for multiple hosts on > the Ethernet network to access the same block storage using ATA over > Ethernet. There's no need for anything like an NFS server" NFS and GFS are sort of equivalent, layer wise. > "But there's a snag. Any time you're using a lot of disks, you're increasing > the chances that one of the disks will fail. Usually you use RAID to take > care of this issue by introducing some redundancy. Unfortunately, Linux > software RAID is not cluster-aware. That means each host on the network > cannot do RAID 10 using mdadm and have things simply work out." What they are saying is that you can't export two ATAoE/iSCSI shares, have mdadm RAID on top, and then have GFS on top, because the mdadm layer isn't cluster aware. But you aren't using RAID on that level. RAID would be on the shd machines (hardware or mdadm RAID on the disks you use for storage, before any exporting via ATAoE or iSCSI happens. If you want the servers mirrored (i.e. RAID1), that's what you would use DRBD as I mentioned earlier. But then you wouldn't mount a share from each machine, you'd mount just one of the two, and have shds clustered for fail-over. > So, finally, what should i do? Can you or anybody suggest me some howtos and > what is the correct order to group machines and implement clustering? See above. Have a Google around for the things I mentioned, and ask more specific questions. :) Gordan From rpeterso at redhat.com Fri Jul 25 13:10:16 2008 From: rpeterso at redhat.com (Bob Peterson) Date: Fri, 25 Jul 2008 08:10:16 -0500 Subject: [Linux-cluster] Re: Using GFS2 to store vmware disk files In-Reply-To: <48897AFE.4070009@gmail.com> References: <48879C09.6040509@gmail.com> <48897AFE.4070009@gmail.com> Message-ID: <1216991416.4003.59.camel@technetium.msp.redhat.com> Hi, On Fri, 2008-07-25 at 09:04 +0200, carlopmart wrote: > carlopmart wrote: > > Hi all, > > > > I have installed one host with rhel5.2 and vmware server 2.0 rc1. I > > stored vmdk files on a GFS2 partition with these params on /etc/fstab: > > rw,_netdev,noatime,noexec,nodev,nosuid. But performance is really really > > poor compared with ocfs2 for example or ext3 ... 
> > > > Somebody knows how can I increase GFS2 performance filesystem to store > > a lot of 2 GB vmdk files?? > > > > Many thanks. > > > > Please, any hints?? First and most importantly, make sure you have a very recent version of the GFS2 kernel source. We've done some performance improvements recently. Second, you can tune your environment using the hints on the wiki: http://sources.redhat.com/cluster/wiki/FAQ/GFS#gfs_tuning Regards, Bob Peterson Red Hat Clustering & GFS From m at vanrossen.org Fri Jul 25 14:11:16 2008 From: m at vanrossen.org (Maarten van Rossen) Date: Fri, 25 Jul 2008 16:11:16 +0200 Subject: [Linux-cluster] service is only visible on one of the to cluster nodes Message-ID: Hi, I added a new service nlprc07q152 on my cluster but only one of the two cluster nodes will see it!! Look at my clustat output: [root at nlprc07b03 etc]# clustat Cluster Status for bangkok @ Fri Jul 25 15:57:09 2008 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ nlprc07b04.nlprc07.post.tnt 1 Online, rgmanager nlprc07b03.nlprc07.post.tnt 2 Online, Local, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:nlprc07q01 nlprc07b04.nlprc07.post.tnt started service:nlprc07q02 nlprc07b03.nlprc07.post.tnt started service:nlprc07q03 nlprc07b03.nlprc07.post.tnt started service:nlprc07q151 nlprc07b04.nlprc07.post.tnt started service:nlprc07q152 nlprc07b03.nlprc07.post.tnt started [root at nlprc07b03 etc]# [root at nlprc07b04 ~]# clustat\ > Cluster Status for bangkok @ Fri Jul 25 15:43:40 2008 Member Status: Quorate Member Name ID Status ------ ---- ---- ------ nlprc07b04.nlprc07.post.tnt 1 Online, Local, rgmanager nlprc07b03.nlprc07.post.tnt 2 Online, rgmanager Service Name Owner (Last) State ------- ---- ----- ------ ----- service:nlprc07q01 nlprc07b04.nlprc07.post.tnt started service:nlprc07q02 nlprc07b03.nlprc07.post.tnt started service:nlprc07q03 nlprc07b03.nlprc07.post.tnt started service:nlprc07q151 nlprc07b04.nlprc07.post.tnt started [root at nlprc07b04 ~]# /etc/cmcluster is the same on both machines: [root at nlprc07b03 ~]# md5sum /etc/cluster/cluster.conf 19d8bcc71bf80106e9a571aa53570538 /etc/cluster/cluster.conf [root at nlprc07b04 ~]# md5sum /etc/cluster/cluster.conf 19d8bcc71bf80106e9a571aa53570538 /etc/cluster/cluster.conf Does anyone know how to solve this? regards, Maarten From linux at vfemail.net Fri Jul 25 15:37:54 2008 From: linux at vfemail.net (Alex) Date: Fri, 25 Jul 2008 18:37:54 +0300 Subject: [Linux-cluster] help on configuring a shared gfs volume in a =?iso-8859-1?q?load=09balanced_http?= cluster In-Reply-To: References: <200807241531.48032.linux@vfemail.net> <200807251213.48564.linux@vfemail.net> Message-ID: <200807251837.54867.linux@vfemail.net> Hi Gordan, Thanks for your reply... I have performed some small steps forward... > > Actually, i want just to have hdb1 from shd1 and hdc1 from shd2 joided in > > one volume. No mirror for this volume at that momment. Is possible? If > > yes, how? Using ATAoE? > > Set up ATAoE on shd and use it to export a volume. Connect to this ATAoE > share from the front end nodes. You can then use something like Cluster > LVM (CLVM) to unify them into one volume. Now, I setup ATAoE inside shd1 and shd2 and now, i am able to see exported disks on my redhat cluster manager machine (rhclm), so should be the same view on all our rs1, rs2 and rs3 webservers. 
See below: [root at rhclm ~]# cat /proc/partitions major minor #blocks name 3 0 19551168 hda 3 1 98248 hda1 3 2 1 hda2 3 5 1000408 hda5 3 6 8673808 hda6 152 560 78150744 etherd/e2.3 152 561 39075088 etherd/e2.3p1 152 562 39075624 etherd/e2.3p2 152 288 39082680 etherd/e1.2 152 289 19541056 etherd/e1.2p1 152 290 19541592 etherd/e1.2p2 [root at rhclm ~]# aoe-stat e1.2 40.020GB eth0 up e2.3 80.026GB eth0 up [root at rhclm ~]# [root at rhclm ~]# ls -l /dev/etherd/* brw-r----- 1 root disk 152, 288 Jul 25 17:47 /dev/etherd/e1.2 brw-r----- 1 root disk 152, 289 Jul 25 17:47 /dev/etherd/e1.2p1 brw-r----- 1 root disk 152, 290 Jul 25 17:47 /dev/etherd/e1.2p2 brw-r----- 1 root disk 152, 560 Jul 25 17:47 /dev/etherd/e2.3 brw-r----- 1 root disk 152, 561 Jul 25 17:47 /dev/etherd/e2.3p1 brw-r----- 1 root disk 152, 562 Jul 25 17:47 /dev/etherd/e2.3p2 [root at rhclm ~]# So, is exacly what i exported from shd1 and shd2 servers (hdb -> 2*20GB slices and hdc -> 2*40GB slices) >From here i am lost. I understand from you, that now, on one of our realservers, let say rs1, shoud i use CLVM to unify them. I want to obtain 2 volumes: VOL1=e1.2p1+e2.3p1 VOL2=e1.2p2+e2.3p2 I am not sure how to do it using clvm... All is crossing my mind is mdadm... mdadm -C /dev/md0 -l 0 -n 2 \ /dev/etherd/e1.2p1 /dev/etherd/e2.3p1 and after that, to use: pvcreate /dev/md0 vgcreate extendible_lvm /dev/md0 lvcreate --extents 60GB --name extendible_lvm www_docroot and finally mkfs.gfs /dev/extendible_lvm/www_docroot Can you help me regading clvm? Regards, Alx > > Then create GFS on this volume. > > Note that if you lose either of the two shd machines you will likely lose > all the data. > > > After that, i would like to know, how to install GFS on this volume and > > use it as documennt root on our real web servers (rs1, rs2, rs3). Is > > possible? If yes, how? > > Yes, when you have the logical volume consisting of shd1 and shd2, create > the GFS on it as per the docs (mkfs.gfs), mount it to where you want it, > and point Apache at that path. Nothing magical about it, it's just like > any other once you have it mounted. > > > I don't understand from your explanation, how to group machines: shd1 and > > shd2 should be in one cluster and rs1, rs2 and rs3 in other cluster > > I don't see why you need shd1 and shd2 machines in a cluster. They are > just SANs. Unless they are mirroring each other or beign each other's > backup there is no immediately obvious reason from your example why they > should be clustered together. > > > or: shd1 and shd2 shoud be regular servers which is just exporting their > > HDD > > Yes. And you don't export the HDD per se using ATAoE or iSCSI - you export > a "volume" (which is just a file on shd's file system that is effectively > a disk image). > > > using ATAoE and rs1, rs2 and rs2 to be grouped in one cluster which > > are importing a GFS volume from somwhere? > > rs machines would import the ATAoE volumes, establish a logical volume on > top of them, and then start the GFS file system on top of that. > > > If yes, from where? How can i configure a GFS volume on > > ATAoE disks and from where will be accesible? > > It will be accessible from any machine in the cluster the GFS volume is > built for (in this case rs set), once they connect the ATAoE (or iSCSI if > that's what you use for it, there isn't THAT much difference between > them) shares from shds. 
> > > I need another one machine > > which will act as agregator for ATAoE disks or our real web servers (rs1, > > rs2, rs3) will responsible to import directly these disks? > > You don't need an agregator, you can unify the volumes using CLVM into one > big logical volume, and have GFS live on top of that. > > >> and ATAoE or iSCSI to export the > >> volumes for the rs machines to mount. > > > > In our lab we are using regular hard disks, so iSCSI is excluded. > > iSCSI is a network protocol, nothing to do with SCSI disks per se. > It's SCSI-over-ethernet. You can export any file on a machine as a volume > using iSCSI. Whether the underlying disk is SCSI, ATA or something exotic > is entirely irrelevant. > > ATAoE and iSCSI are both applicable to your case. ATAoE has somewhat lower > overheads (read: a little faster) but is ethernet layer based. iSCSI is > TCP based so is routable. iSCSI is also a little more mature. > > > I read an article here (http://www.linuxjournal.com/article/8149) about > > ATAoE and i have some questions: > > > > - on our centos 5.2 boxes, we already have aoe kernel module but we don't > > have aoe-stat command. Is any packet shoud i install via yum to have this > > command (or other command to hadle aoe disks) or is required do download > > aoetools-26.tar.gz and compile from source > > (http://sourceforge.net/projects/aoetools/) > > > > - in above article they are talking about RAID10, LVM and JFS. They are > > not teaching me about GFS and clustering. They choose JFS and not GFS > > saying that "JFS is a filesystem that can grow dynamically to large > > sizes, so he is going to put a JFS filesystem on a logical volume". I > > want that but using GFS, is possible or not? > > There are several concepts and technologies you need to go read up on > before getting further with this: > ATAoE > iSCSI > LVM/CLVM for volume management > > If you add additional volumes (e.g. exported via iSCSI or ATAoE) to your > SAN boxes, you can add them into your CLVM volume you have GFS on top of, > and the virtual "disk" (logical volume) will show as being bigger. You can > then grow the GFS file system on this volume and have it extend onto the > additional space. > > > They are saying that: > > > > "using a cluster filesystem such as GFS, it is possible for multiple > > hosts on the Ethernet network to access the same block storage using ATA > > over Ethernet. There's no need for anything like an NFS server" > > NFS and GFS are sort of equivalent, layer wise. > > > "But there's a snag. Any time you're using a lot of disks, you're > > increasing the chances that one of the disks will fail. Usually you use > > RAID to take care of this issue by introducing some redundancy. > > Unfortunately, Linux software RAID is not cluster-aware. That means each > > host on the network cannot do RAID 10 using mdadm and have things simply > > work out." > > What they are saying is that you can't export two ATAoE/iSCSI shares, have > mdadm RAID on top, and then have GFS on top, because the mdadm layer isn't > cluster aware. But you aren't using RAID on that level. > > RAID would be on the shd machines (hardware or mdadm RAID on the disks you > use for storage, before any exporting via ATAoE or iSCSI happens. > > If you want the servers mirrored (i.e. RAID1), that's what you would use > DRBD as I mentioned earlier. But then you wouldn't mount a share from each > machine, you'd mount just one of the two, and have shds clustered for > fail-over. > > > So, finally, what should i do? 
Can you or anybody suggest me some howtos > > and what is the correct order to group machines and implement clustering? > > See above. Have a Google around for the things I mentioned, and ask more > specific questions. :) > > Gordan > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster From pariviere at ippon.fr Fri Jul 25 16:10:40 2008 From: pariviere at ippon.fr (Pierre-Alain RIVIERE) Date: Fri, 25 Jul 2008 18:10:40 +0200 Subject: [Linux-cluster] Mix LVM locking type (1 and 2) on the same host Message-ID: <1217002241.8248.14.camel@t61> Hello, I'm trying to setup a 3 nodes CLVM cluster : alpha, beta and omega. My setup is pretty simple right now, with the following cluster.conf. I think cluster state is OK as : -------------------------------------------- # cman_tool nodes Node Sts Inc Joined Name 1 M 100 2008-07-25 17:40:12 alpha 2 M 100 2008-07-25 17:40:12 beta 100 M 48 2008-07-25 17:40:12 omega --------------------------------------------- # group_tool type level name id state fence 0 default 00010002 none [1 2 100] dlm 1 clvmd 00020002 none [1 2 100] ---------------------------------------------- lvm just works as expected on alpha and beta. But on omega, when I try to use lvm, I've got these errors : ----------------------------------------------- # lvcreate -L10M -nsrv04 xenvg Rounding up size to full physical extent 12.00 MB Error locking on node omega: device-mapper: reload ioctl failed: Invalid argument Failed to activate new LV. ----------------------------------------------- The 3 servers have the same installation (ubuntu 8.0.4.1, cman, clm), same configuration BUT on omega I use LVM on another VG which is not clustered. ----------------------------------------------- # vgdisplay --- Volume group --- VG Name data System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 3 VG Access read/write VG Status resizable MAX LV 0 Cur LV 1 Open LV 1 Max PV 0 Cur PV 1 Act PV 1 VG Size 15.00 GB PE Size 4.00 MB Total PE 3839 Alloc PE / Size 256 / 1.00 GB Free PE / Size 3583 / 14.00 GB VG UUID ATgVwZ-U5Gh-SxMW-91DO-vt9u-TkJa-XXQNQa --- Volume group --- VG Name xenvg System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 6 VG Access read/write VG Status resizable Clustered yes Shared no MAX LV 0 Cur LV 4 Open LV 0 Max PV 0 Cur PV 1 Act PV 1 VG Size 1020.00 MB PE Size 4.00 MB Total PE 255 Alloc PE / Size 15 / 60.00 MB Free PE / Size 240 / 960.00 MB VG UUID NKq9S2-4biX-7nRW-35Ww-OTHh-xTwH-q4VMQi ----------------------------------------------- I use this LVM configuration on the 3 nodes : locking_library = "liblvm2clusterlock.so" locking_type = 2 library_dir = "/lib/lvm2" Is there's a know problem using this mix configuration on the same server? Thks. From david.costakos at gmail.com Sat Jul 26 03:05:43 2008 From: david.costakos at gmail.com (Dave Costakos) Date: Fri, 25 Jul 2008 20:05:43 -0700 Subject: [Linux-cluster] GFS volume filled to the brim - "No space left on device" although still data blocks free In-Reply-To: <200807150944.30832.rottmann@atix.de> References: <200807150944.30832.rottmann@atix.de> Message-ID: <6b6836c60807252005i6bf1d02cjbf80c4fbf851a9c6@mail.gmail.com> I just wanted to confirm that I saw just this exact same issue today on a stock RHEL 4 Update 5. Running gfs_tool reclaim made my file system "work" again. 
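In case it saves someone else a search, the command simply takes the mount point (the path below is an example) and may ask for confirmation before it runs:

# gfs_tool reclaim /mnt/gfs

It returns unused metadata blocks to the pool of free data blocks, which is why the file system became writable again.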
# rpm -qa | grep -e 'GFS\|cman\|magma\|ccs'|sort ccs-1.0.10-0 ccs-devel-1.0.10-0 cman-1.0.17-0 cman-devel-1.0.17-0 cman-kernel-xenU-2.6.9-50.2 cman-kernheaders-2.6.9-50.2 GFS-6.1.14-0 GFS-kernel-xenU-2.6.9-72.2 magma-1.0.7-1 magma-plugins-1.0.12-0 # uname -rm 2.6.9-55.ELxenU x86_64 2008/7/15 Reiner Rottmann > Hello everyone, > > I've experienced strange behavior on a 20 GB GFS formatted volume (although > same behaviour applies to smaller and larger sizes) when reaching the max > available disk space by writing lots of 256 byte files in a nested > directory > structure (~15k files in one dir). > > The expected behaviour would be that all free data blocks are transformed > to > inodes and metadata as required but although there are still plenty > datablocks free, new 256 byte files cannot be created due to "No space left > on device". > > After that, when creating sequential files via touch, it is expected that > they > are created till all data blocks are transformed in inodes representing the > files. When all data blocks are used, "No space left on device" is > expected. > But in this strange scenario, files are created at random!? > > Also when executing gfs_tool reclaim, new files are createable again. But > gfs_tool reclaim only should increase the number of already available free > data blocks by cleaning unused metadata blocks. > > In my understanding, it should not be necessary to reclaim blocks, if there > are still free data blocks left. > > Has anyone an explanation for this? > > Best regards, > > Reiner Rottmann > > > --%<--------------------------------------------------------------------------- > (Filesystem filled with 256 byte files.) > > # for i in $(seq 1 1000); do touch waste.$i; done > touch: cannot touch `waste.3': No space left on device > touch: cannot touch `waste.6': No space left on device > touch: cannot touch `waste.12': No space left on device > touch: cannot touch `waste.13': No space left on device > touch: cannot touch `waste.15': No space left on device > touch: cannot touch `waste.16': No space left on device > touch: cannot touch `waste.20': No space left on device > touch: cannot touch `waste.25': No space left on device > touch: cannot touch `waste.28': No space left on device > touch: cannot touch `waste.29': No space left on device > touch: cannot touch `waste.32': No space left on device > touch: cannot touch `waste.37': No space left on device > touch: cannot touch `waste.38': No space left on device > touch: cannot touch `waste.39': No space left on device > touch: cannot touch `waste.48': No space left on device > touch: cannot touch `waste.55': No space left on device > touch: cannot touch `waste.56': No space left on device > touch: cannot touch `waste.59': No space left on device > touch: cannot touch `waste.60': No space left on device > touch: cannot touch `waste.63': No space left on device > ^C > > # for i in $(seq 1 1000); do touch waste2.$i; done > touch: cannot touch `waste2.1': No space left on device > touch: cannot touch `waste2.8': No space left on device > touch: cannot touch `waste2.10': No space left on device > touch: cannot touch `waste2.11': No space left on device > touch: cannot touch `waste2.12': No space left on device > touch: cannot touch `waste2.14': No space left on device > touch: cannot touch `waste2.17': No space left on device > touch: cannot touch `waste2.19': No space left on device > touch: cannot touch `waste2.21': No space left on device > touch: cannot touch `waste2.24': No space left on device > touch: cannot 
touch `waste2.28': No space left on device > touch: cannot touch `waste2.31': No space left on device > touch: cannot touch `waste2.32': No space left on device > touch: cannot touch `waste2.33': No space left on device > touch: cannot touch `waste2.40': No space left on device > touch: cannot touch `waste2.43': No space left on device > touch: cannot touch `waste2.44': No space left on device > touch: cannot touch `waste2.49': No space left on device > touch: cannot touch `waste2.54': No space left on device > touch: cannot touch `waste2.55': No space left on device > touch: cannot touch `waste2.57': No space left on device > touch: cannot touch `waste2.58': No space left on device > touch: cannot touch `waste2.61': No space left on device > ^C > > # gfs_tool df . > /mnt/gfstest: > SB lock proto = "lock_dlm" > SB lock table = "axqa01:gfstest" > SB ondisk format = 1309 > SB multihost format = 1401 > Block size = 1024 > Journals = 3 > Resource Groups = 78 > Mounted lock proto = "lock_dlm" > Mounted lock table = "axqa01:gfstest" > Mounted host data = "" > Journal number = 0 > Lock module flags = > Local flocks = FALSE > Local caching = FALSE > Oopses OK = FALSE > > Type Total Used Free use% > ------------------------------------------------------------------------ > inodes 18343309 18343309 0 100% > metadata 1690156 1687524 2632 100% > data 43931 0 43931 0% > > # rpm -qa | grep -e 'GFS\|cman\|magma\|ccs'|sort > GFS-6.1.15-1 > GFS-kernel-2.6.9-60.9 > GFS-kernel-2.6.9-75.11 > GFS-kernel-smp-2.6.9-60.9 > GFS-kernel-smp-2.6.9-75.11 > ccs-1.0.11-1 > cman-1.0.17-0.el4_6.3 > cman-kernel-smp-2.6.9-45.15 > cman-kernel-smp-2.6.9-53.8 > magma-1.0.8-1 > magma-devel-1.0.8-1 > magma-plugins-1.0.12-0 > > # cat /etc/redhat-release > Red Hat Enterprise Linux AS release 4 (Nahant Update 6) > > # uname -a > Linux realserver10 2.6.9-67.0.4.ELsmp #1 SMP Fri Jan 18 05:00:00 EST 2008 > x86_64 x86_64 x86_64 GNU/Linux > > --%<--------------------------------------------------------------------------- > > -- > Gruss / Regards, > > Dipl.-Ing. (FH) Reiner Rottmann > > Phone: +49-89 452 3538-12 > > http://www.atix.de/ > http://open-sharedroot.org/ > > PGP Key ID: 0xCA67C5A6 > PGP Key Fingerprint = BF59FF006360B6E8D48F26B10D9F5A84CA67C5A6 > > ** > ATIX Informationstechnologie und Consulting AG > Einsteinstr. 10 > 85716 Unterschleissheim > Deutschland/Germany > > Phone: +49-89 452 3538-0 > Fax: +49-89 990 1766-0 > > Registergericht: Amtsgericht Muenchen > Registernummer: HRB 168930 > USt.-Id.: DE209485962 > > Vorstand: > Marc Grimme, Mark Hlawatschek, Thomas Merz (Vors.) > > Vorsitzender des Aufsichtsrats: > Dr. Martin Buss > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -- Dave Costakos mailto:david.costakos at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From danb at zu.com Sat Jul 26 06:30:00 2008 From: danb at zu.com (Dan Brown) Date: Sat, 26 Jul 2008 00:30:00 -0600 Subject: [Linux-cluster] Clustering Propagation Times? Message-ID: <00cf01c8eee9$0b496850$21dc38f0$@com> Hi, This isn't a direct RHCS related question so feel free to point me to a better list or respond off list, etc. I've been following this list mostly for the purposes of attempting to get my DRBD+GFS2+CMAN clustering working (so far unsuccessfully as it doesn't appear to fence). I don't think I've done any posting yet, just lurking. 
In the mean time while troubleshooting DRBD I've been running two geographically distant groups of web servers, replicating data to each other in several groups of local/remote csync2 "clusters". I guess it's not technically a clustering system at all but rather a file mirroring system similar to constantly rsync'ing files back and forth between each server pair. I've added some wrapper scripts to provide some locking and prevent it from running synchronously since it tends to fail when it does. Replication however still isn't a speedy process (hence my attempts to utilize DRBD/GFS to satisfy those above me) with a minimum max replication time of two minutes. Replication is cron'd to run on one server during odd minutes, on the other server on even minutes. We've got a 2xT1 on one end and a 10M line on the other. A 100MB file for example would take 3 minutes 13 seconds to transfer or so if both the local and remote connections were relatively idle and nearly maxes out a 2xT1 line. A 512mb file about ten minutes. I would suspect that other clustered disk replication technologies out there (although I haven't seen many methods to do this other than DRBD, rsync, and home brew solutions) would have similar replication times and be limited almost entirely by connection speed and traffic congestion. So how do you deal with replication delays in a web cluster where two browsers hitting different servers are supposed to both be able to download the same 50mb PDF but one server may not have that entire file yet. Phrases like "build scheduling into the CMS" tend to be like an old stick of dynamite around here. ___ Dan Brown danb at zu.com From nick at javacat.f2s.com Sun Jul 27 09:11:05 2008 From: nick at javacat.f2s.com (Nick Lunt) Date: Sun, 27 Jul 2008 10:11:05 +0100 Subject: [Linux-cluster] GFS only - no cluster Message-ID: <000301c8efc8$b3289ca0$1979d5e0$@f2s.com> Hi Folks, Tomorrow I've got to create some GFS partitions over iSCSI and I'd like to know if the method I plan on using is correct or not. We will have 5 load balanced RHAS 5u2 servers sharing some GFS partitions. Here's my plan of action. 1. install gfs-utils (don't install clustering or "cluster storage"). 2. present the LUNs to each server. 3. do lvm stuff on each server. 3. mkfs.gfs2 -j5 -p lock_dlm -t what:goeshere /dev/vg02/lvolx . 4. mount from each server and put in fstab on each server. My problem is at number 3. If I want to use lock_dlm I need to specify a table . However I do not want to setup a cluster I just want GFS. I can use lock_nolock which does not require a table but will that render the filesystem useless with >1 server having access to it ? Unfortunately I've found no documentation on a GFS only setup, I would be very grateful if somebody could help me out. Many thanks, Nick . From gordan at bobich.net Sun Jul 27 12:38:56 2008 From: gordan at bobich.net (Gordan Bobic) Date: Sun, 27 Jul 2008 13:38:56 +0100 Subject: [Linux-cluster] GFS only - no cluster In-Reply-To: <000301c8efc8$b3289ca0$1979d5e0$@f2s.com> References: <000301c8efc8$b3289ca0$1979d5e0$@f2s.com> Message-ID: <488C6C60.50102@bobich.net> Nick Lunt wrote: > Here's my plan of action. > > 1. install gfs-utils (don't install clustering or "cluster storage"). > 2. present the LUNs to each server. > 3. do lvm stuff on each server. > 3. mkfs.gfs2 -j5 -p lock_dlm -t what:goeshere /dev/vg02/lvolx . > 4. mount from each server and put in fstab on each server. > > My problem is at number 3. If I want to use lock_dlm I need to specify a > table . 
However I do not want to setup a cluster I > just want GFS. You don't have a choice. Clustering isn't optional. GFS will not work without clustering because DLM depends on it to establish quorum and suchlike. > I can use lock_nolock which does not require a table but will that render > the filesystem useless with >1 server having access to it ? That will corrupt the FS because locking won't work. > Unfortunately I've found no documentation on a GFS only setup, I would be > very grateful if somebody could help me out. The reason there's no documentation is because it's not possible. What's the problem with having cluster running, though? It only takes a few lines of XML in cluster.conf, and this is reasonably well documented. You don't need any failover services, just the node entries. You could use OCFS2 instead, which doesn't require RHCS, but you will still need to set up a roughly equivalent configuration to get it's own locking working, so it still wouldn't save you any effort. Gordan From sunhux at gmail.com Mon Jul 28 01:58:04 2008 From: sunhux at gmail.com (sunhux G) Date: Mon, 28 Jul 2008 09:58:04 +0800 Subject: [Linux-cluster] Any other way to install latest patches on RHEL if there's no access to Internet Message-ID: <60f08e700807271858g12f46ceg47758d12ee93e317@mail.gmail.com> Hi, If our RHEL servers (Ver 4.6 on Sun X platforms) do not have access to Internet, I suppose it's not possible for us to use up2date (or yum) to install latest patches. Is there any other way of getting the latest Redhat patches installed? Can we get the current installed patches on our servers & then compare it against the latest list in Redhat, download manually to a thumb drive & then install? Granting Internet access appears to be an issue currently Thanks U -------------- next part -------------- An HTML attachment was scrubbed... URL: From ajeet.singh.raina at logica.com Mon Jul 28 06:00:59 2008 From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet) Date: Mon, 28 Jul 2008 11:30:59 +0530 Subject: [Linux-cluster] Why We create Priquorum Partition? Message-ID: <0139539A634FD04A99C9B8880AB70CB209B179BB@in-ex004.groupinfra.com> Hello Guys, I want to setup Clustering and I read in manual I read We Need to Create minimum 10 MB to 200 MB priquorum disk partition.Whats That? Anybody who can explain why Cluster needs that space for? Thanks in advance, Ajeet This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Santosh.Panigrahi at in.unisys.com Mon Jul 28 06:16:27 2008 From: Santosh.Panigrahi at in.unisys.com (Panigrahi, Santosh Kumar) Date: Mon, 28 Jul 2008 11:46:27 +0530 Subject: [Linux-cluster] Why We create Priquorum Partition? In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B179BB@in-ex004.groupinfra.com> References: <0139539A634FD04A99C9B8880AB70CB209B179BB@in-ex004.groupinfra.com> Message-ID: Quorum disk is required to avoid a spilt brain situation in red hat cluster, of having cluster nodes more than 2. In a cluster, if partitioning happens then the partition having the qdisk will run the cluster. 
More information can be get from cluster wiki (http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorum ) Thanks, Santosh ________________________________ From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet Sent: Monday, July 28, 2008 11:31 AM To: linux clustering Subject: [Linux-cluster] Why We create Priquorum Partition? Hello Guys, I want to setup Clustering and I read in manual I read We Need to Create minimum 10 MB to 200 MB priquorum disk partition.Whats That? Anybody who can explain why Cluster needs that space for? Thanks in advance, Ajeet This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.williams at redhat.com Mon Jul 28 07:17:21 2008 From: eric.williams at redhat.com (Eric Williams) Date: Mon, 28 Jul 2008 08:17:21 +0100 Subject: [Linux-cluster] Any other way to install latest patches on RHEL if there's no access to Internet In-Reply-To: <60f08e700807271858g12f46ceg47758d12ee93e317@mail.gmail.com> References: <60f08e700807271858g12f46ceg47758d12ee93e317@mail.gmail.com> Message-ID: <1217228634-sup-1844@eric.fab.redhat.com> Excerpts from sunhux G's message of Mon Jul 28 02:58:04 +0100 2008: > Hi, > > > If our RHEL servers (Ver 4.6 on Sun X platforms) do not > have access to Internet, I suppose it's not possible for us > to use up2date (or yum) to install latest patches. > About 90% true; there are workarounds: http://kbase.redhat.com/faq/FAQ_103_10415.shtm There's a similar procedure for up2date, but it's no longer in the kbase. Both procedures only get you updated to the release on the media you've downloaded, though, and do not inlcude the errata. > Is there any other way of getting the latest Redhat patches > installed? > Have you looked at RHN Proxy? https://rhn.redhat.com/rhn/help/proxy/rhn420/en/ch-example-topologies.jsp#s1-example-topologies-simple > Can we get the current installed patches on our servers & > then compare it against the latest list in Redhat, download > manually to a thumb drive & then install? > Red Hat offers RHN Satellite and Proxy for this particular usage profile. Other ways would probably work, but seem like a lot of trouble. So if you're time isn't important to you, check out yum-downloader in the yum-utils package, plus the RHN API documentation. You may be able to script something. > > Granting Internet access appears to be an issue currently > > > Thanks > U cya, eric -- Eric Williams GSS-EMEA 08:17:01 up 17 days, 19:52, 2 users, load average: 0.57, 0.75, 0.95 From ccaulfie at redhat.com Mon Jul 28 07:58:41 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Mon, 28 Jul 2008 08:58:41 +0100 Subject: [Linux-cluster] Multipathing, CLVM and GFS In-Reply-To: <4884DF96.4010908@amnh.org> References: <486BCFC4.9030203@amnh.org> <486C7FEF.8070300@redhat.com> <4884DF96.4010908@amnh.org> Message-ID: <488D7C31.8080307@redhat.com> Sajesh Singh wrote: > > > Christine Caulfield wrote: >> Sajesh Singh wrote: >>> Centos 4.6 >>> Cluster Suite >>> >>> I am currently running a 2 node GFS cluster. 
The storage is provided >>> via a fiber channel connection to the SAN. Each node currently has a >>> single FC connection to the SAN. I would like to migrate to using >>> dm-multipath with each node having dual fiber channel connections to >>> the SAN. Can I assume that CLVM is aware of the /dev/dm-# devices >>> that are used to access the multipathed devices? Are there any >>> gotchas that are associated with installing the >>> device-mapper-multipath software after the GFS cluster is up and >>> running? Are there any howtos available for this type of setup? >>> >> >> clvmd works fine with dm-multipath devices. You will probably have to >> edit /etc/lvm/lvm.conf to exclude the underlying /dev/sd devices to >> stop it getting confused though. >> >> You won't be able to do this with GFS mounted on the local node >> though, you'll have to umount it, setup dm-multipath, vgscan & >> remount. You CAN leave them mounted on other nodes while you do it. >> > Christine, > Should clvmd be restarted as well so that I can create new > volume groups? I have device-mapper-multipath setup, but if I try to run > pvcreate /dev/mapper/mpath1p1 the command just hangs without any errors. No, you shouldn't need to restart clvmd unless you have an old version and have added or removed PVs (which might be the case if you have changed multipath devices). pvcreate does very little in the way of cluster operations so it's probably worth checking it's not stuck writing to the dm device. (sorry for the delay in replying, I've been away). -- Chrissie From ccaulfie at redhat.com Mon Jul 28 08:06:28 2008 From: ccaulfie at redhat.com (Christine Caulfield) Date: Mon, 28 Jul 2008 09:06:28 +0100 Subject: [Linux-cluster] CS5 / ip addr instead of node name in cluster.conf ? In-Reply-To: <488488B5.1000501@bull.net> References: <488488B5.1000501@bull.net> Message-ID: <488D7E04.2020404@redhat.com> Alain Moulle wrote: > Hi > > I think I remember that with CS4, it was possible > to set IP addr instead of node name in cluster.conf > such as : > > tag, not the tag. -- Lon From garromo at us.ibm.com Mon Jul 28 20:39:00 2008 From: garromo at us.ibm.com (Gary Romo) Date: Mon, 28 Jul 2008 14:39:00 -0600 Subject: [Linux-cluster] remove a GFS file system Message-ID: What ALL should be done to properly remove a GFS file system? Thanks! Gary Romo -------------- next part -------------- An HTML attachment was scrubbed... URL: From garromo at us.ibm.com Mon Jul 28 22:21:42 2008 From: garromo at us.ibm.com (Gary Romo) Date: Mon, 28 Jul 2008 16:21:42 -0600 Subject: [Linux-cluster] remove a GFS file system In-Reply-To: Message-ID: Anything special, like a gfs_remove or something? Gary Romo Gary Romo/Denver/IBM at I BMUS To Sent by: linux-cluster at redhat.com linux-cluster-bou cc nces at redhat.com Subject [Linux-cluster] remove a GFS file 07/28/2008 02:39 system PM Please respond to linux clustering What ALL should be done to properly remove a GFS file system? Thanks! Gary Romo-- Linux-cluster mailing list Linux-cluster at redhat.com https://www.redhat.com/mailman/listinfo/linux-cluster -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: pic19118.gif Type: image/gif Size: 1255 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From orkcu at yahoo.com Mon Jul 28 23:20:32 2008 From: orkcu at yahoo.com (Roger Pena Escobio) Date: Mon, 28 Jul 2008 19:20:32 -0400 (EDT) Subject: [Linux-cluster] Any other way to install latest patches on RHEL if there's no access to Internet In-Reply-To: <1217228634-sup-1844@eric.fab.redhat.com> Message-ID: <170540.75812.qm@web88302.mail.re4.yahoo.com> --- Eric Williams wrote: > Excerpts from sunhux G's message of Mon Jul 28 > 02:58:04 +0100 2008: > > Hi, > > > > > > If our RHEL servers (Ver 4.6 on Sun X platforms) > do not > > have access to Internet, I suppose it's not > possible for us > > to use up2date (or yum) to install latest patches. > > > > About 90% true; there are workarounds: > > http://kbase.redhat.com/faq/FAQ_103_10415.shtm > > There's a similar procedure for up2date, but it's no > longer in the kbase. > Both procedures only get you updated to the release > on the media you've > downloaded, though, and do not inlcude the errata. > > > Is there any other way of getting the latest > Redhat patches > > installed? > > > > Have you looked at RHN Proxy? > https://rhn.redhat.com/rhn/help/proxy/rhn420/en/ch-example-topologies.jsp#s1-example-topologies-simple > > > Can we get the current installed patches on our > servers & > > then compare it against the latest list in Redhat, > download > > manually to a thumb drive & then install? > > > > Red Hat offers RHN Satellite and Proxy for this > particular usage profile. > Other ways would probably work, but seem like a lot > of trouble. So if > you're time isn't important to you, check out > yum-downloader in the > yum-utils package, plus the RHN API documentation. > You may be able to > script something. the script is already done : mrepo http://dag.wieers.com/rpm/packages/mrepo/ work great in rhel4, in rhel5 use to have a problem with rhnlib and don't know if it fixed in cvs cu roger From david at craigon.co.uk Tue Jul 29 15:42:16 2008 From: david at craigon.co.uk (David J Craigon) Date: Tue, 29 Jul 2008 16:42:16 +0100 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 Message-ID: Hello, I've been given the job of getting a box of Dell blades (an M600) fencing correctly using the onboard DRACs (which Dell call an iDRAC). As far as I can tell, no one has done this- none of the existing fence_ scripts appear to do the trick. The command line interface is very different from a traditional Dell DRAC. Has anyone already done this? I'm going to write a fence script to do this, but I thought I'd check before I wasted my time. Thanks everyone, David From bkyoung at gmail.com Tue Jul 29 20:05:57 2008 From: bkyoung at gmail.com (Brandon Young) Date: Tue, 29 Jul 2008 15:05:57 -0500 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 In-Reply-To: References: Message-ID: <824ffea00807291305w4c542f2fr764ae54a29585897@mail.gmail.com> I use this method of fencing on my cluster. With RHCS, there is a supplied fencing script for DRAC cards. The trick is you have to enable telnet on the DRAC cards for the supplied script to work (you can either do this through the web interface, or install the Dell Management Software and issue some command I don't remember right now). Since the DRAC cards are (should be) on a private network, this is not too bad of a problem. On Tue, Jul 29, 2008 at 10:42 AM, David J Craigon wrote: > Hello, > > I've been given the job of getting a box of Dell blades (an M600) > fencing correctly using the onboard DRACs (which Dell call an iDRAC). 
> As far as I can tell, no one has done this- none of the existing > fence_ scripts appear to do the trick. The command line interface is > very different from a traditional Dell DRAC. > > Has anyone already done this? I'm going to write a fence script to do > this, but I thought I'd check before I wasted my time. > > Thanks everyone, > > David > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdinitto at redhat.com Tue Jul 29 20:09:27 2008 From: fdinitto at redhat.com (Fabio M. Di Nitto) Date: Tue, 29 Jul 2008 22:09:27 +0200 (CEST) Subject: [Linux-cluster] ANNOUNCE: cluster.git repository moved to fedorahosted.org Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi everybody, the new repository is now available here: * git://git.fedorahosted.org/cluster.git (Anonymous) * ssh://git.fedorahosted.org/git/cluster.git (Authorized) * http://git.fedorahosted.org/git/cluster.git (Git Web) the old repository has been made read-only and it's not possible to push changes there anylonger. In order to switch to the new repository you can either perform a clone or change .git/config [remote "origin"] url= to match the new one. IMPORTANT NOTES: - - commit email format has changed. - - commit email header contains the same (and more) X-Git information that can be used for filtering. - - emails to cluster-cvs at sources.redhat.com are temporary disabled. cluster-devel at redhat.com is the only mailing list receiving notifications. This will change soon in such a way that _only_ cluster-cvs will see commit emails. - - the FC4 and STABLE branches are now frozen and cannot be changed. (You will receive an error from git push) - - all branches in the "git branch -r" list (as of today) cannot be deleted. (You will receive an error from git push) - - only branched in the "git branch -r" list (as of today) will generate emails to cluster-devel/cvs. This filter allows to push changes to private branches without flooding people inbox'es and mailing list. - - tags cannot be deleted. (You will receive an error from git push) Please notify me immediatly in case of problems with the new tree. Many thanks should go to all the sysadmins that have been working hard to assist our team in this transition. We seriously appreciate the effort. Happy hacking Fabio - -- I'm going to make him an offer he can't refuse. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.2.2 (GNU/Linux) iQIVAwUBSI94/QgUGcMLQ3qJAQI4yw//YELbUxjJ/abuhW2w7a3MkyW8VBqVEVN5 2FVW9J6+5HkbjVxAf27FxYvNc9FEtvSnLRxB1SKyqwtMRGnNoRXI4pnm8EdezfV4 aFwOnYcWOpL6Fvr/Y6mQG+kKSDu0cXSr7M93tT3EMINZCbiAwhO7QTMdSaCzWE9Z c2W1dW6iU71IMkwsDdX5Au0Ft7n2Y3bA4RJMhzzfG8PL7pkLLw1jCdxE8eh2A2cD IPPsVjiE+RhUEegTr4aSdGiCtyx6c+7GEvTK+a5tveI6+EeEVVDOMBv+rC6dPKpj alSz8vve4QeYDRugrWD9zyIsOa5nrnjr4Cz7Bn8aE6GEkDWnhUs387m1n/U9FcAS 1fDk00FgF5xtpmxoBfdyZ9SNSb81Y2HHsI4AmM4eJ3XrguEEjbEFVFG5yOuTB37B F8tJlW0jwCw6GO75dHmSTFbR4KB2FPS4viE/EnFyurW4V/ON5MVo+FGJ571oIlGo 4MsmrArf5fn4hAi5FaPME8cROn6kzeTYM0f4Wg6ko4Gg6FLpqgqk8i6Fm1GkCy0R exSTmv9R4z+AhyL3fX3IzQgbhEuaPefB/L6yza2WffKcql7KKVmL8BNIEKNznp48 yb5+mmQt4QMPey+2bVZKBs3fWY8g0ZTmiMtRwAiW39EsobRSzp+lbBUoeOvakdwf yjxvJ4+T6Og= =VV2L -----END PGP SIGNATURE----- From fdinitto at redhat.com Wed Jul 30 04:38:25 2008 From: fdinitto at redhat.com (Fabio M. 
Di Nitto) Date: Wed, 30 Jul 2008 06:38:25 +0200 (CEST) Subject: [Linux-cluster] Re: ANNOUNCE: cluster.git repository moved to fedorahosted.org (UPDATE) In-Reply-To: References: Message-ID: On Tue, 29 Jul 2008, Fabio M. Di Nitto wrote: > - emails to cluster-cvs at sources.redhat.com are temporary disabled. > cluster-devel at redhat.com is the only mailing list receiving > notifications. This will change soon in such a way that _only_ > cluster-cvs will see commit emails. This issue has been fixed now. All emails should go to cluster-cvs mailing list only. A problem was found with the annotated tag generation that prevented emails from being generated. The following tags: * [new tag] cman_2_0_86 -> cman_2_0_86 * [new tag] cmirror-kernel_0_1_11 -> cmirror-kernel_0_1_11 * [new tag] cmirror-kernel_0_1_12 -> cmirror-kernel_0_1_12 * [new tag] cmirror-kernel_0_1_13 -> cmirror-kernel_0_1_13 * [new tag] cmirror_1_1_20 -> cmirror_1_1_20 * [new tag] cmirror_1_1_21 -> cmirror_1_1_21 * [new tag] cmirror_1_1_22 -> cmirror_1_1_22 * [new tag] fence_1_32_62 -> fence_1_32_62 * [new tag] fence_1_32_63 -> fence_1_32_63 did not hit any mailing list. The problem has been solved now by enforcing email filters only on branches and tracking branches. Fabio -- I'm going to make him an offer he can't refuse. From david at craigon.co.uk Wed Jul 30 10:12:55 2008 From: david at craigon.co.uk (David J Craigon) Date: Wed, 30 Jul 2008 11:12:55 +0100 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 In-Reply-To: <824ffea00807291305w4c542f2fr764ae54a29585897@mail.gmail.com> References: <824ffea00807291305w4c542f2fr764ae54a29585897@mail.gmail.com> Message-ID: Are you sure you are using an actual M600 blade chassis? On the ones I've got, they speak a different language after the telnet from other DRAC cards, hence the problem. 2008/7/29 Brandon Young : > I use this method of fencing on my cluster. With RHCS, there is a supplied > fencing script for DRAC cards. The trick is you have to enable telnet on > the DRAC cards for the supplied script to work (you can either do this > through the web interface, or install the Dell Management Software and issue > some command I don't remember right now). Since the DRAC cards are (should > be) on a private network, this is not too bad of a problem. > > On Tue, Jul 29, 2008 at 10:42 AM, David J Craigon > wrote: >> >> Hello, >> >> I've been given the job of getting a box of Dell blades (an M600) >> fencing correctly using the onboard DRACs (which Dell call an iDRAC). >> As far as I can tell, no one has done this- none of the existing >> fence_ scripts appear to do the trick. The command line interface is >> very different from a traditional Dell DRAC. >> >> Has anyone already done this? I'm going to write a fence script to do >> this, but I thought I'd check before I wasted my time. >> >> Thanks everyone, >> >> David >> >> -- >> Linux-cluster mailing list >> Linux-cluster at redhat.com >> https://www.redhat.com/mailman/listinfo/linux-cluster > > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > From linux at vfemail.net Wed Jul 30 11:52:41 2008 From: linux at vfemail.net (Alex) Date: Wed, 30 Jul 2008 14:52:41 +0300 Subject: [Linux-cluster] "Inc" column description/semnification Message-ID: <200807301452.41459.linux@vfemail.net> Hello, What does it mean "Inc" column in the output of the cman_tool nodes command? 
[root at rs2 ~]# cman_tool nodes Node Sts Inc Joined Name 1 M 8 2008-07-30 11:03:12 192.168.113.5 2 M 4 2008-07-30 10:59:34 192.168.113.4 [root at rs2 ~]# Can anybody tell me what represent 4 and 8 in Inc coulmn? Reading manual (man cman_tool) i couldn't find any description. Just reference to it! Regards, Alx From mgrac at redhat.com Wed Jul 30 12:48:30 2008 From: mgrac at redhat.com (Marek 'marx' Grac) Date: Wed, 30 Jul 2008 14:48:30 +0200 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 In-Reply-To: References: Message-ID: <4890631E.2060305@redhat.com> David J Craigon wrote: > Hello, > > I've been given the job of getting a box of Dell blades (an M600) > fencing correctly using the onboard DRACs (which Dell call an iDRAC). > As far as I can tell, no one has done this- none of the existing > fence_ scripts appear to do the trick. The command line interface is > very different from a traditional Dell DRAC. > > Has anyone already done this? I'm going to write a fence script to do > this, but I thought I'd check before I wasted my time. > > If it is possible please use an exisiting infrastructure for fencing agents (branch MASTER). You can just modify get/set status in one of the existing (close enough) agents. If you can give me an access to this device for few hours, I have no problem to write it. If you will have any questing about writing fencing agents don't bother and ask for help. marx, Fence Master :) -- Marek Grac Red Hat Czech s.r.o. From david at craigon.co.uk Wed Jul 30 15:37:51 2008 From: david at craigon.co.uk (David J Craigon) Date: Wed, 30 Jul 2008 16:37:51 +0100 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 In-Reply-To: <4890631E.2060305@redhat.com> References: <4890631E.2060305@redhat.com> Message-ID: It turns out that the right way to do this is use what Dell call "CMC"- a device that manages all the blades, not just one (just like the DRAC/MC). This is like a mix of the Dell DRAC/MC and DRAC 5 in fence_drac. I've written a patch that adds support for the CMC to fence_drac. This is my first patch ever using git, so hopefully it's good for you. This has been tested on a CMC, but it also changes the code for a Dell 1950. I'm going to get a 1950 and test it tomorrow. Feedback welcomed! David --- fence/agents/drac/fence_drac.pl | 36 +++++++++++++++++++++++++++++------- 1 files changed, 29 insertions(+), 7 deletions(-) diff --git a/fence/agents/drac/fence_drac.pl b/fence/agents/drac/fence_drac.pl index f199814..f96ef22 100644 --- a/fence/agents/drac/fence_drac.pl +++ b/fence/agents/drac/fence_drac.pl @@ -38,6 +38,7 @@ my $DRAC_VERSION_MC = 'DRAC/MC'; my $DRAC_VERSION_4I = 'DRAC 4/I'; my $DRAC_VERSION_4P = 'DRAC 4/P'; my $DRAC_VERSION_5 = 'DRAC 5'; +my $DRAC_VERSION_CMC = 'CMC'; my $PWR_CMD_SUCCESS = "/^OK/"; my $PWR_CMD_SUCCESS_DRAC5 = "/^Server power operation successful$/"; @@ -192,10 +193,15 @@ sub login # DRAC5 prints version controller version info # only after you've logged in. 
if ($drac_version eq $DRAC_VERSION_UNKNOWN) { - if ($t->waitfor(Match => "/.*\($DRAC_VERSION_5\)/m")) { + + if (my ($prematch,$match)=$t->waitfor(Match => "/.*(\($DRAC_VERSION_5\)|$DRAC_VERSION_CMC)/m")) { + if ($match=~/$DRAC_VERSION_CMC/) { + $drac_version = $DRAC_VERSION_CMC; + } else { $drac_version = $DRAC_VERSION_5; + } $cmd_prompt = "/\\\$ /"; - $PWR_CMD_SUCCESS = $PWR_CMD_SUCCESS_DRAC5; + $PWR_CMD_SUCCESS = $PWR_CMD_SUCCESS_DRAC5; } else { print "WARNING: unable to detect DRAC version '$_'\n"; } @@ -228,8 +234,10 @@ sub set_power_status } elsif ($drac_version eq $DRAC_VERSION_5) { $cmd = "racadm serveraction $svr_action"; - } else - { + } + elsif ($drac_version eq $DRAC_VERSION_CMC) { + $cmd = "racadm serveraction -m $modulename $svr_action"; + } else { $cmd = "serveraction -d 0 $svr_action"; } @@ -271,6 +279,11 @@ sub set_power_status } } fail "failed: unexpected response: '$err'" if defined $err; + + # on M600 blade systems, after power on or power off, status takes a couple of seconds to report correctly. Wait here before checking status again + sleep 5; + + } @@ -285,6 +298,8 @@ sub get_power_status if ($drac_version eq $DRAC_VERSION_5) { $cmd = "racadm serveraction powerstatus"; + } elsif ($drac_version eq $DRAC_VERSION_CMC) { + $cmd = "racadm serveraction powerstatus -m $modulename"; } else { $cmd = "getmodinfo"; } @@ -306,7 +321,7 @@ sub get_power_status fail "failed: unkown dialog exception: '$_'" unless (/^$cmd$/); - if ($drac_version ne $DRAC_VERSION_5) { + if ($drac_version ne $DRAC_VERSION_5 && $drac_version ne $DRAC_VERSION_CMC) { #Expect: # # # 1 ----> chassis Present ON Normal CQXYV61 @@ -335,6 +350,11 @@ sub get_power_status if(m/^Server power status: (\w+)/) { $status = lc($1); } + } + elsif ($drac_version eq $DRAC_VERSION_CMC) { + if(m/^(\w+)/) { + $status = lc($1); + } } else { my ($group,$arrow,$module,$presence,$pwrstate,$health, $svctag,$junk) = split /\s+/; @@ -364,7 +384,8 @@ sub get_power_status } $_=$status; - if(/^(on|off)$/i) + + if (/^(on|off)$/i) { # valid power states } @@ -440,6 +461,7 @@ sub do_action } set_power_status on; + fail "failed: $_" unless wait_power_status on; msg "success: powered on"; @@ -641,7 +663,7 @@ if ($drac_version eq $DRAC_VERSION_III_XT) fail "failed: option 'modulename' not compatilble with DRAC version '$drac_version'" if defined $modulename; } -elsif ($drac_version eq $DRAC_VERSION_MC) +elsif ($drac_version eq $DRAC_VERSION_MC || $drac_version eq $DRAC_VERSION_CMC) { fail "failed: option 'modulename' required for DRAC version '$drac_version'" unless defined $modulename; -- 1.5.5.1 >From 2899ae4468a69b89346cafba13022a9b214404f2 Mon Sep 17 00:00:00 2001 From: David J Craigon Date: Wed, 30 Jul 2008 16:34:24 +0100 Subject: Add a comment to state the CMC version this script works on --- fence/agents/drac/fence_drac.pl | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/fence/agents/drac/fence_drac.pl b/fence/agents/drac/fence_drac.pl index f96ef22..11cc771 100644 --- a/fence/agents/drac/fence_drac.pl +++ b/fence/agents/drac/fence_drac.pl @@ -13,6 +13,7 @@ # PowerEdge 1850 DRAC 4/I 1.35 (Build 09.27) # PowerEdge 1850 DRAC 4/I 1.40 (Build 08.24) # PowerEdge 1950 DRAC 5 1.0 (Build 06.05.12) +# PowerEdge M600 CMC 1.01.A05.200803072107 # use Getopt::Std; -- 1.5.5.1 2008/7/30 Marek 'marx' Grac : > David J Craigon wrote: >> >> Hello, >> >> I've been given the job of getting a box of Dell blades (an M600) >> fencing correctly using the onboard DRACs (which Dell call an iDRAC). 
>> As far as I can tell, no one has done this- none of the existing >> fence_ scripts appear to do the trick. The command line interface is >> very different from a traditional Dell DRAC. >> >> Has anyone already done this? I'm going to write a fence script to do >> this, but I thought I'd check before I wasted my time. >> >> > > If it is possible please use an exisiting infrastructure for fencing agents > (branch MASTER). You can just modify get/set status in one of the existing > (close enough) agents. If you can give me an access to this device for few > hours, I have no problem to write it. If you will have any questing about > writing fencing agents don't bother and ask for help. > > marx, > Fence Master :) > > -- > Marek Grac > Red Hat Czech s.r.o. > > -- > Linux-cluster mailing list > Linux-cluster at redhat.com > https://www.redhat.com/mailman/listinfo/linux-cluster > From lhh at redhat.com Wed Jul 30 17:34:06 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 30 Jul 2008 13:34:06 -0400 Subject: [Linux-cluster] remove a GFS file system In-Reply-To: References: Message-ID: <1217439246.30587.192.camel@ayanami> On Mon, 2008-07-28 at 14:39 -0600, Gary Romo wrote: > What ALL should be done to properly remove a GFS file system? > Thanks! umount /mountpoint ? After that, you can use 'dd' to nuke it after you umount it on all nodes. -- Lon From lhh at redhat.com Wed Jul 30 17:36:48 2008 From: lhh at redhat.com (Lon Hohberger) Date: Wed, 30 Jul 2008 13:36:48 -0400 Subject: [Linux-cluster] "Inc" column description/semnification In-Reply-To: <200807301452.41459.linux@vfemail.net> References: <200807301452.41459.linux@vfemail.net> Message-ID: <1217439408.30587.195.camel@ayanami> On Wed, 2008-07-30 at 14:52 +0300, Alex wrote: > Hello, > > What does it mean "Inc" column in the output of the cman_tool nodes command? > > [root at rs2 ~]# cman_tool nodes > Node Sts Inc Joined Name > 1 M 8 2008-07-30 11:03:12 192.168.113.5 > 2 M 4 2008-07-30 10:59:34 192.168.113.4 > [root at rs2 ~]# > > Can anybody tell me what represent 4 and 8 in Inc coulmn? Local incarnation # for the node, if I recall correctly. They usually do not match cluster-wide. -- Lon From jparsons at redhat.com Wed Jul 30 20:56:30 2008 From: jparsons at redhat.com (jim parsons) Date: Wed, 30 Jul 2008 16:56:30 -0400 Subject: [Linux-cluster] Fencing using iDRAC/ Dell M600 In-Reply-To: References: <4890631E.2060305@redhat.com> Message-ID: <1217451390.3371.3.camel@localhost.localdomain> On Wed, 2008-07-30 at 16:37 +0100, David J Craigon wrote: > It turns out that the right way to do this is use what Dell call > "CMC"- a device that manages all the blades, not just one (just like > the DRAC/MC). This is like a mix of the Dell DRAC/MC and DRAC 5 in > fence_drac. > > I've written a patch that adds support for the CMC to fence_drac. This > is my first patch ever using git, so hopefully it's good for you. > > This has been tested on a CMC, but it also changes the code for a Dell > 1950. I'm going to get a 1950 and test it tomorrow. > > Feedback welcomed! THANK YOU. SINCERELY. Please update us with test results. If no regressions pop up, this is going into the agent ASAP. THANK YOU. 
:) -Jim, who often feels fenced in > > David > > --- > fence/agents/drac/fence_drac.pl | 36 +++++++++++++++++++++++++++++------- > 1 files changed, 29 insertions(+), 7 deletions(-) > > diff --git a/fence/agents/drac/fence_drac.pl b/fence/agents/drac/fence_drac.pl > index f199814..f96ef22 100644 > --- a/fence/agents/drac/fence_drac.pl > +++ b/fence/agents/drac/fence_drac.pl > @@ -38,6 +38,7 @@ my $DRAC_VERSION_MC = 'DRAC/MC'; > my $DRAC_VERSION_4I = 'DRAC 4/I'; > my $DRAC_VERSION_4P = 'DRAC 4/P'; > my $DRAC_VERSION_5 = 'DRAC 5'; > +my $DRAC_VERSION_CMC = 'CMC'; > > my $PWR_CMD_SUCCESS = "/^OK/"; > my $PWR_CMD_SUCCESS_DRAC5 = "/^Server power operation successful$/"; > @@ -192,10 +193,15 @@ sub login > # DRAC5 prints version controller version info > # only after you've logged in. > if ($drac_version eq $DRAC_VERSION_UNKNOWN) { > - if ($t->waitfor(Match => "/.*\($DRAC_VERSION_5\)/m")) { > + > + if (my ($prematch,$match)=$t->waitfor(Match => > "/.*(\($DRAC_VERSION_5\)|$DRAC_VERSION_CMC)/m")) { > + if ($match=~/$DRAC_VERSION_CMC/) { > + $drac_version = $DRAC_VERSION_CMC; > + } else { > $drac_version = $DRAC_VERSION_5; > + } > $cmd_prompt = "/\\\$ /"; > - $PWR_CMD_SUCCESS = $PWR_CMD_SUCCESS_DRAC5; > + $PWR_CMD_SUCCESS = $PWR_CMD_SUCCESS_DRAC5; > } else { > print "WARNING: unable to detect DRAC version '$_'\n"; > } > @@ -228,8 +234,10 @@ sub set_power_status > } > elsif ($drac_version eq $DRAC_VERSION_5) { > $cmd = "racadm serveraction $svr_action"; > - } else > - { > + } > + elsif ($drac_version eq $DRAC_VERSION_CMC) { > + $cmd = "racadm serveraction -m $modulename $svr_action"; > + } else { > $cmd = "serveraction -d 0 $svr_action"; > } > > @@ -271,6 +279,11 @@ sub set_power_status > } > } > fail "failed: unexpected response: '$err'" if defined $err; > + > + # on M600 blade systems, after power on or power off, status takes a > couple of seconds to report correctly. 
Wait here before checking > status again > + sleep 5; > + > + > } > > > @@ -285,6 +298,8 @@ sub get_power_status > > if ($drac_version eq $DRAC_VERSION_5) { > $cmd = "racadm serveraction powerstatus"; > + } elsif ($drac_version eq $DRAC_VERSION_CMC) { > + $cmd = "racadm serveraction powerstatus -m $modulename"; > } else { > $cmd = "getmodinfo"; > } > @@ -306,7 +321,7 @@ sub get_power_status > > fail "failed: unkown dialog exception: '$_'" unless (/^$cmd$/); > > - if ($drac_version ne $DRAC_VERSION_5) { > + if ($drac_version ne $DRAC_VERSION_5 && $drac_version ne $DRAC_VERSION_CMC) { > #Expect: > # # > # 1 ----> chassis Present ON Normal CQXYV61 > @@ -335,6 +350,11 @@ sub get_power_status > if(m/^Server power status: (\w+)/) { > $status = lc($1); > } > + } > + elsif ($drac_version eq $DRAC_VERSION_CMC) { > + if(m/^(\w+)/) { > + $status = lc($1); > + } > } else { > my ($group,$arrow,$module,$presence,$pwrstate,$health, > $svctag,$junk) = split /\s+/; > @@ -364,7 +384,8 @@ sub get_power_status > } > > $_=$status; > - if(/^(on|off)$/i) > + > + if (/^(on|off)$/i) > { > # valid power states > } > @@ -440,6 +461,7 @@ sub do_action > } > > set_power_status on; > + > fail "failed: $_" unless wait_power_status on; > > msg "success: powered on"; > @@ -641,7 +663,7 @@ if ($drac_version eq $DRAC_VERSION_III_XT) > fail "failed: option 'modulename' not compatilble with DRAC version > '$drac_version'" > if defined $modulename; > } > -elsif ($drac_version eq $DRAC_VERSION_MC) > +elsif ($drac_version eq $DRAC_VERSION_MC || $drac_version eq $DRAC_VERSION_CMC) > { > fail "failed: option 'modulename' required for DRAC version '$drac_version'" > unless defined $modulename; > -- > 1.5.5.1 > > > >From 2899ae4468a69b89346cafba13022a9b214404f2 Mon Sep 17 00:00:00 2001 > From: David J Craigon > Date: Wed, 30 Jul 2008 16:34:24 +0100 > Subject: Add a comment to state the CMC version this script works on > > --- > fence/agents/drac/fence_drac.pl | 1 + > 1 files changed, 1 insertions(+), 0 deletions(-) > > diff --git a/fence/agents/drac/fence_drac.pl b/fence/agents/drac/fence_drac.pl > index f96ef22..11cc771 100644 > --- a/fence/agents/drac/fence_drac.pl > +++ b/fence/agents/drac/fence_drac.pl > @@ -13,6 +13,7 @@ > # PowerEdge 1850 DRAC 4/I 1.35 (Build 09.27) > # PowerEdge 1850 DRAC 4/I 1.40 (Build 08.24) > # PowerEdge 1950 DRAC 5 1.0 (Build 06.05.12) > +# PowerEdge M600 CMC 1.01.A05.200803072107 > # > > use Getopt::Std; From linux at vfemail.net Thu Jul 31 07:04:23 2008 From: linux at vfemail.net (Alex) Date: Thu, 31 Jul 2008 10:04:23 +0300 Subject: [Linux-cluster] "Inc" column description/semnification In-Reply-To: <1217439408.30587.195.camel@ayanami> References: <200807301452.41459.linux@vfemail.net> <1217439408.30587.195.camel@ayanami> Message-ID: <200807311004.23788.linux@vfemail.net> On Wednesday 30 July 2008 20:36, Lon Hohberger wrote: > On Wed, 2008-07-30 at 14:52 +0300, Alex wrote: > > Hello, > > > > What does it mean "Inc" column in the output of the cman_tool nodes > > command? > > > > [root at rs2 ~]# cman_tool nodes > > Node Sts Inc Joined Name > > 1 M 8 2008-07-30 11:03:12 192.168.113.5 > > 2 M 4 2008-07-30 10:59:34 192.168.113.4 > > [root at rs2 ~]# > > > > Can anybody tell me what represent 4 and 8 in Inc coulmn? > > Local incarnation # for the node, if I recall correctly. They usually > do not match cluster-wide. Because we know what is its name, let me ask you about Inc signification, how can be interpreted and what represent 8 and 4 in above column... 8m, 8pps, 8kbps, 8kv, womans, mans, aliens? 
The manual and the documentation contain no information at all about the Inc column! And another question: why do the numbers in the Inc column change every time a node is rebooted, and then stay constant until the next reboot?

[root at rs2 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M     24   2008-07-30 13:11:23  192.168.113.5
   2   M     12   2008-07-30 13:02:48  192.168.113.4
[root at rs2 ~]#

After a reboot, the Inc column contains 24 and 12 (the previous values, as you can see above, were 8 and 4). What do they represent?

Regards,
Alx

From linux at vfemail.net  Thu Jul 31 08:43:22 2008
From: linux at vfemail.net (Alex)
Date: Thu, 31 Jul 2008 11:43:22 +0300
Subject: [Linux-cluster] how to mount a gfs2 volume on all our real webservers in /var/www/html
Message-ID: <200807311143.22407.linux@vfemail.net>

Hello all,

I have 3 real HTTP servers running CentOS 5.2 (rs1=192.168.113.3, rs2=192.168.113.4, rs3=192.168.113.5) on which I want to mount some gfs2 volumes (mylv1 as /var/www/html and mylv2 as /var/www/cgi-bin). In my present setup, rs1 is still missing (it will be added later). Using Conga, I configured rs2 and rs3 to be part of "httpcluster". On all real webservers, the mylv1 and mylv2 volumes are accessible:

[root at rs2 ~]# lvscan
  ACTIVE   '/dev/myvg2/mylv2' [48.63 GB] inherit
  ACTIVE   '/dev/myvg1/mylv1' [48.63 GB] inherit
[root at rs2 ~]#

[root at rs3 ~]# lvscan
  ACTIVE   '/dev/myvg2/mylv2' [48.63 GB] inherit
  ACTIVE   '/dev/myvg1/mylv1' [48.63 GB] inherit
[root at rs3 ~]#

Using Conga and this howto:
http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/en-US/RHEL510/Cluster_Administration/s1-apache-inshttpd-CA.html

I added 2 resources:
- httpd_script (type script, full_path_to_script=/etc/rc.d/init.d/httpd)
- docroot_gfs_mp (type gfs, mount_point=/var/www/html, device=/dev/myvg1/mylv1)

and 2 services:
- http_service (linked to the httpd_script resource; responsible for starting httpd on all our nodes)
- docroot_mount (linked to docroot_gfs_mp; responsible for mounting mylv1 on /var/www/html on all our webservers)

The main problem is that the httpd service is started on only one node (currently rs2=192.168.113.4), and mylv1 is mounted on /var/www/html on that same node. If that node becomes unusable (is rebooted), the httpd service is started on rs3, and mylv1 is mounted there instead.

What should be changed in my cluster.conf file so that httpd is started by "httpcluster" on all the webservers and mylv1 is mounted on /var/www/html on all of them?

Here comes my present cluster.conf file:

[root at rs2 ~]# cat /etc/cluster/cluster.conf
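As an aside on the question above, and strictly as a hedged sketch rather than the poster's actual configuration: rgmanager services are failover entities and run on one node at a time by default, which would explain the behaviour described. When a GFS2 filesystem needs to be mounted on every node simultaneously, one common approach on CentOS/RHEL 5 is to take the mount out of rgmanager altogether and put it in /etc/fstab on each node, letting the cman, clvmd and gfs2 init scripts bring it up at boot. Assuming the devices and mount points quoted above:

    # /etc/fstab fragment on each web server (illustrative only)
    /dev/myvg1/mylv1   /var/www/html      gfs2   defaults,noatime   0 0
    /dev/myvg2/mylv2   /var/www/cgi-bin   gfs2   defaults,noatime   0 0

    # ensure the cluster, clvmd, gfs2 and httpd init scripts start at boot
    chkconfig cman on; chkconfig clvmd on; chkconfig gfs2 on; chkconfig httpd on

With that in place, httpd can simply be started by its normal init script on every node rather than by a failover service.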