From sghosh at redhat.com Tue Jul 1 00:02:43 2008
From: sghosh at redhat.com (Subhendu Ghosh)
Date: Mon, 30 Jun 2008 20:02:43 -0400
Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9
In-Reply-To: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com>
References: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com>
Message-ID: <48697423.2030305@redhat.com>
If you are using ocfs2, then ASM and ASMlib are not required. ASM uses raw
disks and ASMlib provides ASM a way to easily recognize said disks.
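If you do end up needing ASMlib later anyway, its whole job is just to stamp
raw partitions and rediscover them, roughly like this (the volume name and
device below are only examples):

/etc/init.d/oracleasm configure                    # answer the prompts once per node
/etc/init.d/oracleasm createdisk DATA1 /dev/sdb1   # stamp a partition for ASM
/etc/init.d/oracleasm scandisks                    # rescan on the other node(s)
/etc/init.d/oracleasm listdisks                    # should now show DATA1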
cheers
Subhendu
Tina Soles wrote:
> Hello,
>
>
>
> I am attempting to setup an Oracle RAC using these instructions:
> http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi_2.html#17
>
>
>
> I am running Fedora 9 with kernel = 2.6.25-14.fc9.i686
>
>
>
> I realize this is probably an "unsupported" version, but it's the only
> version that I could get to work with my firewire setup, so I cannot
> change the kernel.
>
> ocfs2 is up and running, and now I need to install ASMLib 2.0, but it
> appears that there is no rpm distribution for this kernel. Therefore, I
> am attempting to build my own, from the source files,
> oracleasm-2.0.4.tar.gz. After unzipping and untarring, I run
> ./configure and it seems to run fine (see below), but when I try to run
> make install it bombs with an error no rule to make target
> `oracleasm.ko', needed by `install-oracleasm'. Stop.
>
>
>
> I don't have any experience building rpms from source, so any explicit
> instructions you can give me would be much appreciated. Also, does this
> source file contain everything I need in order to build the kernel
> driver, userspace library, and driver support files, or do I need
> separate source files for those? Please forgive my ignorance, as I am
> new to this.
>
>
>
> Thanks in advance for any help you can give me.
>
>
>
> Regards,
>
> Tina
>
>
>
> # ./configure
>
> checking build system type... i686-pc-linux-gnu
>
> checking host system type... i686-pc-linux-gnu
>
> checking for gcc... gcc
>
> checking for C compiler default output file name... a.out
>
> checking whether the C compiler works... yes
>
> checking whether we are cross compiling... no
>
> checking for suffix of executables...
>
> checking for suffix of object files... o
>
> checking whether we are using the GNU C compiler... yes
>
> checking whether gcc accepts -g... yes
>
> checking for gcc option to accept ANSI C... none needed
>
> checking how to run the C preprocessor... gcc -E
>
> checking for a BSD-compatible install... /usr/bin/install -c
>
> checking whether ln -s works... yes
>
> checking for ranlib... ranlib
>
> checking for ar... /usr/bin/ar
>
> checking for egrep... grep -E
>
> checking for ANSI C header files... yes
>
> checking for an ANSI C-conforming const... yes
>
> checking for sys/types.h... yes
>
> checking for sys/stat.h... yes
>
> checking for stdlib.h... yes
>
> checking for string.h... yes
>
> checking for memory.h... yes
>
> checking for strings.h... yes
>
> checking for inttypes.h... yes
>
> checking for stdint.h... yes
>
> checking for unistd.h... yes
>
> checking for unsigned long... yes
>
> checking size of unsigned long... 4
>
> checking for vendor... not found
>
> checking for vendor kernel... not supported
>
> checking for directory with kernel build tree...
> /lib/modules/2.6.25-14.fc9.i686/build
>
> checking for kernel version... 2.6.25-14.fc9.i686
>
> checking for capabilities mask in backing_dev_info... yes
>
> checking for vfsmount in ->get_sb() helpers... yes
>
> checking for for mutex API... yes
>
> checking for for i_private... yes
>
> checking for for i_blksize... no
>
> configure: creating ./config.status
>
> config.status: creating Config.make
>
> config.status: creating include/linux/oracleasm/module_version.h
>
> config.status: creating vendor/sles9/oracleasm.spec-generic
>
> config.status: creating vendor/rhel4/oracleasm.spec-generic
>
> config.status: creating vendor/fc6/oracleasm.spec-generic
>
> config.status: creating vendor/sles10/oracleasm.spec-generic
>
> config.status: creating vendor/rhel5/oracleasm.spec-generic
>
> config.status: creating vendor/common/oracleasm-headers.spec-generic
>
>
>
> # make install
>
> make -C include install
>
> make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include'
>
> make -C linux install
>
> make[2]: Entering directory
> `/root/rpms/source/oracleasm-2.0.4/include/linux'
>
> make -C oracleasm install
>
> make[3]: Entering directory
> `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm'
>
> /bin/sh ../../../mkinstalldirs /usr/local/include/linux/oracleasm
>
> for hdr in abi.h abi_compat.h disk.h error.h manager.h manager_compat.h
> kernel.h compat32.h module_version.h; do \
>
> /usr/bin/install -c -m 644 $hdr
> /usr/local/include/linux/oracleasm/$hdr; \
>
> done
>
> make[3]: Leaving directory
> `/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm'
>
> make[2]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include/linux'
>
> make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include'
>
> make -C kernel install
>
> make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/kernel'
>
> make[1]: *** No rule to make target `oracleasm.ko', needed by
> `install-oracleasm'. Stop.
>
> make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/kernel'
>
> make: *** [kernel-install] Error 2
>
>
>
> Tina Soles
>
> Senior Analyst
>
>
>
> STR Software
>
>
>
> 11505 Allecingie Parkway
> Richmond, VA 23235
> email. tina.soles at strsoftware.com
>
> phone. 804.897.1600
> fax. 804.897.1638
>
> web. www.strsoftware.com
>
>
>
>
> ------------------------------------------------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
--
Subhendu Ghosh
Solutions Architect
Red Hat
From andreas.schneider at f-it.biz Tue Jul 1 08:02:18 2008
From: andreas.schneider at f-it.biz (Andreas Schneider)
Date: Tue, 1 Jul 2008 10:02:18 +0200
Subject: [Linux-cluster] inconsistend volume group after pvmove
Message-ID: <003701c8db50$c8fcb390$5af61ab0$@schneider@f-it.biz>
Hello,
This is our setup: We have 3 Linux servers (2.6.18, CentOS 5), clustered,
with clvmd running one "big" volume group (15 SCSI disks of 69.9 GB each).
After we got a hardware I/O error on one disk, our GFS filesystem began to
loop.
So we stopped all services, determined the corrupted disk (/dev/sdh),
and my intention was to do the following:
- pvmove /dev/sdh
- vgreduce my_volumegroup /dev/sdh
- do an intensive hardware check on the volume
But: this is what happened during pvmove -v /dev/sdh:
.
/dev/sdh: Moved: 78,6%
/dev/sdh: Moved: 79,1%
/dev/sdh: Moved: 79,7%
/dev/sdh: Moved: 80,0%
Updating volume group metadata
Creating volume group backup "/etc/lvm/backup/myvol_vg" (seqno 46).
Error locking on node server1: device-mapper: reload ioctl failed: Das
Argument ist ungültig [the argument is invalid]
Unable to reactivate logical volume "pvmove0"
ABORTING: Segment progression failed.
Removing temporary pvmove LV
Writing out final volume group after pvmove
Creating volume group backup "/etc/lvm/backup/myvol_vg" (seqno 48).
[root at hpserver1 ~]# pvscan
PV /dev/cciss/c0d0p2 VG VolGroup00 lvm2 [33,81 GB / 0 free]
PV /dev/sda VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sdb VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sdc VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sdd VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sde VG fit_vg lvm2 [66,75 GB / 46,75 GB free]
PV /dev/sdf VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sdg VG fit_vg lvm2 [68,36 GB / 0 free]
PV /dev/sdh VG fit_vg lvm2 [68,36 GB / 58,36 GB free]
PV /dev/sdj VG fit_vg lvm2 [68,36 GB / 54,99 GB free]
PV /dev/sdi VG fit_vg lvm2 [68,36 GB / 15,09 GB free]
PV /dev/sdk1 VG fit_vg lvm2 [68,36 GB / 55,09 GB free]
Total: 12 [784,20 GB] / in use: 12 [784,20 GB] / in no VG: 0 [0 ]
That sounded bad and I didn't have any idea what to do, but I read that
pvmove can resume from the point where it stopped, so I started pvmove
again and this time it moved all of the data.
pvscan and vgscan -vvv showed me that all data had been moved from the
/dev/sdh volume to the other volumes.
To be sure I restarted my cluster nodes, but they encountered problems
mounting the gfs filesystems.
I got this error:
[root at server1 ~]# /etc/init.d/clvmd stop
Deactivating VG myvol_vg: Volume group "myvol_vg" inconsistent
WARNING: Inconsistent metadata found for VG myvol_vg - updating to use
version 148
0 logical volume(s) in volume group "myvol_vg" now active
[ OK ]
Stopping clvm: [ OK ]
[root at server1 ~]# /etc/init.d/clvmd start
Starting clvmd: [ OK ]
Activating VGs: 2 logical volume(s) in volume group "VolGroup00" now
active
Volume group "myvol_vg" inconsistent
WARNING: Inconsistent metadata found for VG myvol_vg - updating to use
version 151
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtuFZZe8QKoX8sVA0XRTNoDQVWVftk8cSa
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtqDfFtrJTFTGuju8nNjwtCdPGnzP3hh8k
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtc22hBY40phdVvVdFBFX28PvfF7JrlIYz
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtWfJ1EqXJ309gO3Gx0ZvpNekrmHFo9u2V
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBtCP6czghnQFEjNdv9DF6bsUmnK3eJ5vKp
Error locking on node server1: Volume group for uuid not found:
tGRfaK5aW00pFRXcLtrdHAw5a4GNDVBt0KNlnblpwOfcnqIjk4GJ662dxOsL70GF
0 logical volume(s) in volume group "myvol_vg" now active
[ OK ]
As far as I can tell, these 6 UUIDs are exactly the LVs that should be
found and where all our data is stored.
What followed was at first a careful step-by-step approach and in the end
plain trial and error.
This was one of the first actions:
[root at hpserver1 ~]# vgreduce --removemissing myvol_vg
Logging initialised at Tue Jul 1 10:00:52 2008
Set umask to 0077
Finding volume group "myvol_vg"
Wiping cache of LVM-capable devices
WARNING: Inconsistent metadata found for VG myvol_vg - updating to use
version 229
Volume group "myvol_vg" is already consistent
We tried to deactivate the volume group via vgchange -a n myvol_vg, we
tried "--removemissing", and after a few combined attempts (dmsetup info -c,
dmsetup mknodes and vgchange -a y myvol_vg) we can access our LVs again,
but we still get this message and we don't know why:
Volume group "myvol_vg" inconsistent
WARNING: Inconsistent metadata found for VG myvol_vg - updating to use
version 228
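For completeness, the sequence we keep repeating on each node looks roughly
like this (from memory, so treat it as a sketch rather than an exact
transcript):

vgchange -a n myvol_vg     # deactivate the volume group
dmsetup info -c            # see what device-mapper still knows about
dmsetup mknodes            # recreate the /dev/mapper nodes
vgchange -a y myvol_vg     # reactivate; the LVs come back, but so does the warning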
I'm a little bit worried about our data.
Regards
Andreas
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From stevan.colaco at gmail.com Tue Jul 1 09:06:55 2008
From: stevan.colaco at gmail.com (Stevan Colaco)
Date: Tue, 1 Jul 2008 12:06:55 +0300
Subject: [Linux-cluster] Cluster doesn't come up while rebooting
Message-ID: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com>
Hello All,
I need your help with an issue I am facing.
OS: RHEL4 ES Update 6 64bit
I have a deployment with a 2 + 1 cluster (2 active and one passive). I
have a service which should fail over, but I ran into problems when I
rebooted all 3 servers: the services got disabled. However, when I use
clusvcadm to manually enable a service, it works. Here are the logs:
Jun 25 11:13:15 mb1 clurgmgrd[14825]: Resource Group Manager Starting
Jun 25 11:13:15 mb1 clurgmgrd[14825]: Loading Service Data
Jun 25 11:13:17 mb1 clurgmgrd[14825]: Initializing Services
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: /dev/sdh1 is not mounted
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-BACKUP with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-BACKUP returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-STORE with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-STORE returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-DBDATA with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-DBDATA returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-CONF with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CONF returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-REDOLOG with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-REDOLOG returned
2 (invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-INDEX with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-INDEX returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-LOG with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-LOG returned 2
(invalid argument(s))
Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
LABEL=MB2-ZIMBRA-CLUST with a real device
Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CLUSTER returned
2 (invalid argument(s))
Jun 25 11:13:22 mb1 clurgmgrd: [14825]: /dev/sdg1 is not mounted
Jun 25 11:13:27 mb1 clurgmgrd: [14825]: /dev/sdf1 is not mounted
Jun 25 11:13:33 mb1 clurgmgrd: [14825]: /dev/sde1 is not mounted
Jun 25 11:13:38 mb1 clurgmgrd: [14825]: /dev/sdd1 is not mounted
Jun 25 11:13:43 mb1 clurgmgrd: [14825]: /dev/sdc1 is not mounted
Jun 25 11:13:45 mb1 rgmanager: clurgmgrd startup failed
Jun 25 11:13:48 mb1 clurgmgrd: [14825]: /dev/sdb1 is not mounted
Jun 25 11:13:53 mb1 clurgmgrd: [14825]: /dev/sda1 is not mounted
Jun 25 11:13:58 mb1 clurgmgrd[14825]: Services Initialized
Jun 25 11:14:01 mb1 clurgmgrd[14825]: Logged in SG "usrm::manager"
Jun 25 11:14:01 mb1 clurgmgrd[14825]: Magma Event: Membership Change
Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: Local UP
Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: mbstandby.ku.edu.kw UP
Jun 25 11:14:03 mb1 clurgmgrd[14825]: Magma Event: Membership Change
Jun 25 11:14:03 mb1 clurgmgrd[14825]: State change: mb2.ku.edu.kw UP
MB2 server Logs
Jun 25 11:13:40 mb2 clurgmgrd[14776]: Resource Group Manager Starting
Jun 25 11:13:40 mb2 clurgmgrd[14776]: Loading Service Data
Jun 25 11:13:41 mb2 clurgmgrd[14776]: Initializing Services
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-DBDATA with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-DBDATA returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-INDEX with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-INDEX returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-LOG with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-LOG returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-CONF with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CONF returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: /dev/sdh1 is not mounted
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-BACKUP with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-BACKUP returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-REDOLOG with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-REDOLOG returned
2 (invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-STORE with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-STORE returned 2
(invalid argument(s))
Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
LABEL=MB1-ZIMBRA-CLUST with a real device
Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CLUSTER returned
2 (invalid argument(s))
Jun 25 11:13:46 mb2 clurgmgrd: [14776]: /dev/sdf1 is not mounted
Jun 25 11:13:52 mb2 clurgmgrd: [14776]: /dev/sdg1 is not mounted
Jun 25 11:13:57 mb2 clurgmgrd: [14776]: /dev/sde1 is not mounted
Jun 25 11:14:02 mb2 clurgmgrd: [14776]: /dev/sdd1 is not mounted
Jun 25 11:14:07 mb2 clurgmgrd: [14776]: /dev/sdc1 is not mounted
Jun 25 11:14:10 mb2 rgmanager: clurgmgrd startup failed
Jun 25 11:14:12 mb2 clurgmgrd: [14776]: /dev/sdb1 is not mounted
Jun 25 11:14:18 mb2 clurgmgrd: [14776]: /dev/sda1 is not mounted
Jun 25 11:14:23 mb2 clurgmgrd[14776]: Services Initialized
Jun 25 11:14:25 mb2 clurgmgrd[14776]: Logged in SG "usrm::manager"
Jun 25 11:14:25 mb2 clurgmgrd[14776]: Magma Event: Membership Change
Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: Local UP
Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mb1.ku.edu.kw UP
Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mbstandby.ku.edu.kw UP
MBSTANDBY LOGS
Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Resource Group Manager Starting
Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Loading Service Data
Jun 25 11:13:27 mbstandby clurgmgrd[15850]: Initializing Services
Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdl1 is not mounted
Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdp1 is not mounted
Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdk1 is not mounted
Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdn1 is not mounted
Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdj1 is not mounted
Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdo1 is not mounted
Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdi1 is not mounted
Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdm1 is not mounted
Jun 25 11:13:47 mbstandby sshd(pam_unix)[17583]: session opened for
user root by (uid=0)
Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdd1 is not mounted
Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdh1 is not mounted
Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdg1 is not mounted
Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdc1 is not mounted
Jun 25 11:13:56 mbstandby rgmanager: clurgmgrd startup failed
Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session opened for user
zimbra by (uid=0)
Jun 25 11:13:56 mbstandby zimbra: -bash: /opt/zimbra/log/startup.log:
No such file or directory
Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session closed for user zimbra
Jun 25 11:13:56 mbstandby rc: Starting zimbra: failed
Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdf1 is not mounted
Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdb1 is not mounted
Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sde1 is not mounted
Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sda1 is not mounted
Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Services Initialized
Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Logged in SG "usrm::manager"
Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Magma Event: Membership Change
Jun 25 11:14:09 mbstandby clurgmgrd[15850]: State change: Local UP
Jun 25 11:14:12 mbstandby clurgmgrd[15850]: Magma Event: Membership Change
Jun 25 11:14:12 mbstandby clurgmgrd[15850]: State change: mb1.ku.edu.kw UP
Jun 25 11:14:13 mbstandby clurgmgrd[15850]: Resource groups locked;
not evaluating
Jun 25 11:14:14 mbstandby clurgmgrd[15850]: Magma Event: Membership Change
Jun 25 11:14:14 mbstandby clurgmgrd[15850]: State change: mb2.ku.edu.kw UP
Jun 25 11:49:22 mbstandby sshd(pam_unix)[9438]: session opened for
user root by (uid=0)
I am using e2label labels for the mounts on the failover as well as the
primary server.
My cluster.conf is also attached.
Right now fencing is not set up properly; I am just using manual fencing
and was doing testing with HP iLO fencing.
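If it helps, the labels themselves can be cross-checked on each node with
something like this (device and label names below are just examples taken
from the logs):

e2label /dev/sdh1              # show the label on a partition
findfs LABEL=MB2-STORE         # resolve a label back to a device
blkid | grep MB2               # list all labelled devices in one go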
The first query I have is why it shows "Magma Event: Membership Change".
Since I have initially defined 3 members in the cluster, it should not
give me this. Is it because some package is missing, or do I have to run
up2date?
I have installed the following packages:
ccs-1.0.11-1.x86_64.rpm
cman-kernheaders-2.6.9-53.5.x86_64.rpm gulm-1.0.10-0.x86_64.rpm
magma-plugins-1.0.12-0.x86_64.rpm
ccs-devel-1.0.11-1.x86_64.rpm dlm-1.0.7-1.x86_64.rpm
gulm-devel-1.0.10-0.x86_64.rpm
perl-Net-Telnet-3.03-3.noarch.rpm
cman-1.0.17-0.x86_64.rpm dlm-devel-1.0.7-1.x86_64.rpm
iddev-2.0.0-4.x86_64.rpm rgmanager-1.9.72-1.x86_64.rpm
cman-devel-1.0.17-0.x86_64.rpm
dlm-kernel-2.6.9-52.2.x86_64.rpm iddev-devel-2.0.0-4.x86_64.rpm
system-config-cluster-1.0.51-2.0.noarch.rpm
cman-kernel-2.6.9-53.5.x86_64.rpm
dlm-kernel-smp-2.6.9-52.2.x86_64.rpm luci-0.11.0-3.x86_64.rpm
cman-kernel-smp-2.6.9-53.5.x86_64.rpm fence-1.32.50-2.x86_64.rpm
magma-1.0.8-1.x86_64.rpm
Am I missing any other important package for the cluster? I installed the
packages using rpm -ivh *.rpm.
Also, I stopped the lock_gulmd service as I am using the lock_dlm lock
manager.
Later I tried using just an IP in the service, without mount points and
the application service. Even then it does not start up on reboot. Here
are the logs:
Jun 27 19:44:37 mb1 clurgmgrd[12737]: Resource Group Manager Starting
Jun 27 19:44:37 mb1 clurgmgrd[12737]: Loading Service Data
Jun 27 19:44:37 mb1 fstab-sync[12738]: removed all generated mount points
Jun 27 19:44:38 mb1 clurgmgrd[12737]: Initializing Services
Jun 27 19:44:38 mb1 clurgmgrd[12737]: Services Initialized
Jun 27 19:44:38 mb1 clurgmgrd[12737]: Logged in SG "usrm::manager"
Jun 27 19:44:38 mb1 clurgmgrd[12737]: Magma Event: Membership Change
Jun 27 19:44:38 mb1 clurgmgrd[12737]: State change: Local UP
Jun 27 19:44:38 mb1 rgmanager: clurgmgrd startup succeeded
Jun 27 19:44:41 mb1 clurgmgrd[12737]: Magma Event: Membership Change
Jun 27 19:44:41 mb1 clurgmgrd[12737]: State change:
mbstandby.ku.edu.kw UP
Jun 27 19:44:43 mb1 clurgmgrd[12737]: Magma Event: Membership Change
Jun 27 19:44:43 mb1 clurgmgrd[12737]: State change: mb2.ku.edu.kw UP
Attached is also cluster.conf for this
Please advise on what the issue could be. Thanks in advance.
Regards,
-Steven
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cluster-with-IP.txt
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cluster-with-service.txt
URL:
From gspiegl at gmx.at Tue Jul 1 10:58:41 2008
From: gspiegl at gmx.at (Gerhard Spiegl)
Date: Tue, 01 Jul 2008 12:58:41 +0200
Subject: [Linux-cluster] takeover, fencing & failback
Message-ID: <486A0DE1.6010609@gmx.at>
Hi all,
I'm working on a two node cluster (RHEL 5.2 + RHCS) with one
XEN virtual machine per node:
node1 => VM1
node2 => VM2
When node1 takes over VM2 via the command:
clusvcadm -M vm:VM2 -m node1
node2 gets fenced after takeover is done, which is probably expected behaviour.
As node2 comes up again, it fetches its VM2 back (nofailback="0"), but it
also fences node1 (ipmilan), where VM1 is still running and is therefore
interrupted and restarted on node2.
When node1 comes up the same game in the other direction begins.
Is there a way to avoid this fence loop?
In other words: can a service be migrated from node1 to node2 without other
services that run on node1 being interrupted?
thanks & regards
Gerhard
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cluster.conf
Type: text/xml
Size: 2291 bytes
Desc: not available
URL:
From egraeler at commvault.com Tue Jul 1 13:51:21 2008
From: egraeler at commvault.com (Ernie Graeler)
Date: Tue, 1 Jul 2008 09:51:21 -0400
Subject: [Linux-cluster] GFS2 not releasing disk space?
Message-ID: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com>
All,
I'm new to this list, so I'm not sure if anyone else has encountered
this problem. Also, this is my first post so forgive me if I do
something incorrect. :-) I've created a cluster using 2 nodes and
created a shared file system between them using gfs2. So far, the setup
seems to have gone well: I can see the file system and can write to it
and copy files to it with no problem from either node. However, when I
delete or remove files and directories from the gfs2 file system, the
files and directories go away, but the file system does not reclaim the
space from the deleted files. Is there a tunable parameter that handles
this? Or did I miss something in the configuration? Has anyone else
encountered this situation? If I restart the cluster, the space comes
back, but I don't want to have to restart the cluster every time I
delete data in order to reclaim the space. I'm running gfs2 on CentOS
5.1 X64. I did a google search but came up dry.
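The only generic check I could think of (nothing GFS2-specific, and the
mount point below is just an example) is whether some process still holds
the deleted files open, e.g.:

lsof -a +L1 /mnt/gfs2      # open-but-unlinked files on that filesystem
df -h /mnt/gfs2            # compare with what df reports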
Thanks!
Ernie
Ernst F. Graeler
Systems Analyst/UnixDB Team Supervisor
CommVault Customer Support
Direct: 732.870.4059
Hotline: 877.780.3077
egraeler at commvault.com
******************Legal Disclaimer***************************
"This communication may contain confidential and privileged material
for the sole use of the intended recipient. Any unauthorized review,
use or distribution by others is strictly prohibited. If you have
received the message in error, please advise the sender by reply
email and delete the message. Thank you."
****************************************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 2327 bytes
Desc: image001.jpg
URL:
From swhiteho at redhat.com Tue Jul 1 13:53:41 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Tue, 01 Jul 2008 14:53:41 +0100
Subject: [Linux-cluster] GFS2 not releasing disk space?
In-Reply-To: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com>
References: <9B27FE59406E9E459ACB375B74453F2C01F82727@USEXCHANGE01.gp.cv.commvault.com>
Message-ID: <1214920422.4011.82.camel@quoit>
Hi,
That's an ancient version of GFS2; please use something more recent, such
as the current Fedora kernel.
Steve.
On Tue, 2008-07-01 at 09:51 -0400, Ernie Graeler wrote:
> All,
>
>
>
> I'm new to this list, so I'm not sure if anyone else has encountered
> this problem. Also, this is my first post so forgive me if I do
> something incorrect. :-) I've created a cluster using 2 nodes and
> created a shared file system between them using gfs2. So far, the
> set up seemed to go well, and I can see the file system, and can write
> to it and copy files to it with no problem from either node.
> However, when I delete or remove files and directories from the gfs2
> file system, the files and directories go away, but the file system
> does not reclaim the space from the deleted files. Is there a tunable
> parameter that handles this? Or did I miss something in the
> configuration? Has any one else encountered this situation? If I
> restart the cluster, the space comes back, but I don't want to have to
> restart the cluster every time I delete data in order to reclaim the
> space. I'm running gfs2 on CentOS 5.1 X64. I did a google search but
> came up dry.
>
>
>
> Thanks!
>
> Ernie
>
>
>
>
>
>
> Ernst F. Graeler
>
>
> Systems Analyst/UnixDB Team
> Supervisor
>
>
> CommVault Customer Support
>
>
> Direct: 732.870.4059
>
>
> Hotline: 877.780.3077
>
>
> egraeler at commvault.com
>
>
> ******************Legal Disclaimer***************************
> "This communication may contain confidential and privileged material
> for the sole use of the intended recipient. Any unauthorized review,
> use or distribution by others is strictly prohibited. If you have
> received the message in error, please advise the sender by reply
> email and delete the message. Thank you."
> ****************************************************************
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From jruemker at redhat.com Tue Jul 1 14:42:51 2008
From: jruemker at redhat.com (John Ruemker)
Date: Tue, 01 Jul 2008 10:42:51 -0400
Subject: [Linux-cluster] takeover, fencing & failback
In-Reply-To: <486A0DE1.6010609@gmx.at>
References: <486A0DE1.6010609@gmx.at>
Message-ID: <486A426B.20807@redhat.com>
Gerhard Spiegl wrote:
> Hi all,
>
> I'm working on a two node cluster (RHEL 5.2 + RHCS) with one
> XEN virtual machine per node:
>
> node1 => VM1
> node2 => VM2
>
> When node1 takes over VM2 via the command:
>
> clusvcadm -M vm:VM2 -m node1
>
> node2 gets fenced after takeover is done, which is probably expected behaviour.
>
This is not expected. The vm should migrate and both nodes should
continue running.
> As node2 comes up again it fetches his VM2 back (nofailback="0", but also
> fences node1 (ipmilan) where VM1 is still running an therefore interrupted and
> restartet on node2.
> When node1 comes up the same game in the other direction begins.
> Is there a way to avoid this fence loop?
>
> In other words: can a service be migrated from node1 to node2 without other
> services that run on node1 being interrupted?
>
Are both nodes successfully joined in the cluster? What does 'cman_tool
nodes' say? Can you attach logs showing all of this happening?
John
From l.dardini at comune.prato.it Tue Jul 1 17:13:47 2008
From: l.dardini at comune.prato.it (Leandro Dardini)
Date: Tue, 1 Jul 2008 19:13:47 +0200
Subject: R: [Linux-cluster] Homebrew NAS Cluster
References:
Message-ID: <6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local>
I am running a home-brew NAS cluster for a medium-sized ISP. It runs on a pair of Dell PowerEdge 2900s with 1 terabyte of filesystem exported via NFS to 4 nodes running apache, exim and imap/pop3 services. The filesystem sits on top of drbd in an active/backup setup with heartbeat. Performance is good, but could be better with more memory on the NFS node and faster disks.
I don't know VMware very well, but I run other virtualization solutions, like QEMU. Do you plan to mount the NFS from inside the virtual machine, or create a virtual disk on an exported NFS filesystem?
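In case it is useful, the export itself is nothing fancy; roughly (the path
and subnet are examples, not our real ones):

cat /etc/exports
/srv/data    10.0.0.0/24(rw,sync,no_root_squash)
exportfs -ra               # reload the export table after changes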
Leandro
-----Messaggio originale-----
Da: linux-cluster-bounces at redhat.com per conto di Stephen Nelson-Smith
Inviato: lun 30/06/2008 23.56
A: linux clustering
Oggetto: [Linux-cluster] Homebrew NAS Cluster
Hi all,
I'm in the process of setting up a virtualisation farm which will have
50-60 virtual machines, running a wide range of web, application and
database applications, all on top of vmware vi3.
My budget won't stretch to a commercial NAS solution, so it's either a
SAN, which could get complicated and hard to manage with so many
nodes, or a home-brew NAS solution.
Has anyone done this, on the list? I'm wondering what the catch is?
I'm thinking all I need to do is run NFS on top of a clustered
filesystem, and export to ESX.
I could use some pointers, gotchas, ideas and experiences.
Thanks!
S.
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
-------------- next part --------------
A non-text attachment was scrubbed...
Name: winmail.dat
Type: application/ms-tnef
Size: 3396 bytes
Desc: not available
URL:
From bkyoung at gmail.com Tue Jul 1 18:16:59 2008
From: bkyoung at gmail.com (Brandon Young)
Date: Tue, 1 Jul 2008 13:16:59 -0500
Subject: R: [Linux-cluster] Homebrew NAS Cluster
In-Reply-To: <6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local>
References:
<6F861500A5092B4C8CD653DE20A4AA0D4D7A12@exchange3.comune.prato.local>
Message-ID: <824ffea00807011116m69c61eb5l4d99318093900a30@mail.gmail.com>
Yeah, similar question to the first responder ... Is your intent to have
shared disk space between all the ESX servers? To support live migrations,
etc? If so, then ESX server has a built-in filesystem called vmfs, which
can be shared by all the servers in the farm to store VM images, etc. We
use it at my place of employment. It's just SAN disk volumes shared by all
the ESX servers.
If you're looking for common storage to be shared and accessed among all the
virtual machines, then an NFS farm might be what you're looking for; maybe
it's unnecessary, though. I have a GFS storage cluster where four machines
export the same data to user land. Actually, I have one server handling all
the user space NFS needs (about 50 clients), and it isn't even breathing
hard. I have two other NFS servers facing an HPC cluster with 300 client
machines. I also have a Samba server serving out all this same data to user
land, too, and it is underchallenged as well, with perhaps 100 clients. So,
depending on how much traffic you would need to sustain, it may not even
require a cluster of NFS servers to achieve your goals. If that's what you
need, though, then a homebrew NAS solution where the data is stored on a
clustered filesystem is certainly an option worth considering.
2008/7/1 Leandro Dardini :
> I am running a home-brew NAS Cluster for a medium sized ISP. It is run with
> a pair of Dell PowerEdge 2900 with 1 Terabyte of filesystem exported via NFS
> to 4 nodes running apache, exim and imap/pop3 services. Filesystem is made
> on top of drbd in a active/backup setup with heartbeat. Performance are
> good, but can be better with more memory on nfs node and faster disks.
>
> I don't know VMware very well, but I run other virtualization solutions,
> like QEMU. Do you plan to mount the NFS from inside the virtual machine or
> create a virtual disk on an exported NFS filesystem?
>
> Leandro
>
>
> -----Messaggio originale-----
> Da: linux-cluster-bounces at redhat.com per conto di Stephen Nelson-Smith
> Inviato: lun 30/06/2008 23.56
> A: linux clustering
> Oggetto: [Linux-cluster] Homebrew NAS Cluster
>
> Hi all,
>
> I'm in the process of setting up a virtualisation farm which will have
> 50-60 virtual machines, running a wide range of web, application and
> database applications, all on top of vmware vi3.
>
> My budget won't stretch to a commercial NAS solution, so it's either a
> SAN, which could get complicated and hard to manage with so many
> nodes, or a home-brew NAS solution.
>
> Has anyone done this, on the list? I'm wondering what the catch is?
> I'm thinking all I need to do is run NFS on top of a clustered
> filesystem, and export to ESX.
>
> I could use some pointers, gotchas, ideas and experiences.
>
> Thanks!
>
> S.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From andrew at ntsg.umt.edu Tue Jul 1 18:35:39 2008
From: andrew at ntsg.umt.edu (Andrew A. Neuschwander)
Date: Tue, 01 Jul 2008 12:35:39 -0600
Subject: [Linux-cluster] Homebrew NAS Cluster
In-Reply-To:
References:
Message-ID: <486A78FB.9080100@ntsg.umt.edu>
My setup sounds similar to yours but with a SAN for all the underlying
storage.
I have a large FC SAN (might be cost prohibitive for you), and three
physical (Dell PE1500s) servers. Two of them are running ESX 3.5 and one
is running CentOS. The ESX Servers share a chunk of SAN using VMFS3. The
rest of the SAN is shared by all three physical servers. I have a
handful of virtual CentOS servers to which I've attached the shared SAN
LUNs via raw device mapping (with the SCSI buses set to physical sharing
mode).
I then put the physical and virtual CentOS machines in one GFS cluster
to share the san (using a custom fence script). While this all works and
is in production, the performance isn't what I'd like. Locking calls by
the virtual centos machines really slow things down, especially when
running samba on a vm. I think it's the nature of GFS being exacerbated
by all the abstraction of ESX. It takes quite a bit of tuning.
The biggest caveat for ESX users is that putting a virtual machine's
scsi bus in physical shared-bus mode disables DRS and VMotion. You
can't live migrate these machines. The HA feature still works well though.
-A
--
Andrew A. Neuschwander, RHCE
Linux Systems/Software Engineer
College of Forestry and Conservation
The University of Montana
http://www.ntsg.umt.edu
andrew at ntsg.umt.edu - 406.243.6310
Stephen Nelson-Smith wrote:
> Hi all,
>
> I'm in the process of setting up a virtualisation farm which will have
> 50-60 virtual machines, running a wide range of web, application and
> database applications, all on top of vmware vi3.
>
> My budget won't stretch to a commercial NAS solution, so it's either a
> SAN, which could get complicated and hard to manage with so many
> nodes, or a home-brew NAS solution.
>
> Has anyone done this, on the list? I'm wondering what the catch is?
> I'm thinking all I need to do is run NFS on top of a clustered
> filesystem, and export to ESX.
>
> I could use some pointers, gotchas, ideas and experiences.
>
> Thanks!
>
> S.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
From jerlyon at gmail.com Tue Jul 1 18:57:48 2008
From: jerlyon at gmail.com (Jeremy Lyon)
Date: Tue, 1 Jul 2008 12:57:48 -0600
Subject: [Linux-cluster] IP resource behavior
Message-ID: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com>
Hi,
We noticed today that if we manually remove an IP via ip a del /32 dev
bond0, the service does not detect this and does not fail over.
Shouldn't the service be checking the status of the IP resource to make
sure it is configured and up? We do have the monitor link option enabled.
This is cluster 2 on RHEL 5.1.
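For reference, the test was roughly this (the address below is an example,
not the real service IP):

ip addr del 192.168.10.50/32 dev bond0   # pull the service IP out from under rgmanager
ip addr show dev bond0                   # confirms the address is gone
clustat                                  # service still reports "started", no relocation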
TIA
Jeremy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From theophanis_kontogiannis at yahoo.gr Wed Jul 2 06:39:44 2008
From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis)
Date: Wed, 2 Jul 2008 09:39:44 +0300
Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9
In-Reply-To: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com>
References: <05DA6438AEDF5E4B8583C12EBD6C32C0011C2341@mail.strsoftware.com>
Message-ID: <001c01c8dc0e$6baaaee0$43000ca0$@gr>
Hello,
Just a tip: though obviously I do not know your exact FireWire setup, I
ended up with CentOS 5 and kernel 2.6.18-92.1.6.el5.centos.plus, where
FireWire works perfectly, especially for TCP/IP over Ethernet over FireWire.
Sincerely,
T.K.
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Tina Soles
Sent: Tuesday, July 01, 2008 1:32 AM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Help with Oracle ASMLib 2.0 and Fedora 9
Hello,
I am attempting to setup an Oracle RAC using these instructions:
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi_2.html#17
I am running Fedora 9 with kernel = 2.6.25-14.fc9.i686
I realize this is probably an "unsupported" version, but it's the only
version that I could get to work with my firewire setup, so I cannot change
the kernel.
ocfs2 is up and running, and now I need to install ASMLib 2.0, but it
appears that there is no rpm distribution for this kernel. Therefore, I am
attempting to build my own, from the source files, oracleasm-2.0.4.tar.gz.
After unzipping and untarring, I run ./configure and it seems to run fine
(see below), but when I try to run make install it bombs with an error no
rule to make target `oracleasm.ko', needed by `install-oracleasm'. Stop.
I don't have any experience building rpms from source, so any explicit
instructions you can give me would be much appreciated. Also, does this
source file contain everything I need in order to build the kernel driver,
userspace library, and driver support files, or do I need separate source
files for those? Please forgive my ignorance, as I am new to this.
Thanks in advance for any help you can give me.
Regards,
Tina
# ./configure
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking how to run the C preprocessor... gcc -E
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln -s works... yes
checking for ranlib... ranlib
checking for ar... /usr/bin/ar
checking for egrep... grep -E
checking for ANSI C header files... yes
checking for an ANSI C-conforming const... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking for unsigned long... yes
checking size of unsigned long... 4
checking for vendor... not found
checking for vendor kernel... not supported
checking for directory with kernel build tree...
/lib/modules/2.6.25-14.fc9.i686/build
checking for kernel version... 2.6.25-14.fc9.i686
checking for capabilities mask in backing_dev_info... yes
checking for vfsmount in ->get_sb() helpers... yes
checking for for mutex API... yes
checking for for i_private... yes
checking for for i_blksize... no
configure: creating ./config.status
config.status: creating Config.make
config.status: creating include/linux/oracleasm/module_version.h
config.status: creating vendor/sles9/oracleasm.spec-generic
config.status: creating vendor/rhel4/oracleasm.spec-generic
config.status: creating vendor/fc6/oracleasm.spec-generic
config.status: creating vendor/sles10/oracleasm.spec-generic
config.status: creating vendor/rhel5/oracleasm.spec-generic
config.status: creating vendor/common/oracleasm-headers.spec-generic
# make install
make -C include install
make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/include'
make -C linux install
make[2]: Entering directory
`/root/rpms/source/oracleasm-2.0.4/include/linux'
make -C oracleasm install
make[3]: Entering directory
`/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm'
/bin/sh ../../../mkinstalldirs /usr/local/include/linux/oracleasm
for hdr in abi.h abi_compat.h disk.h error.h manager.h manager_compat.h
kernel.h compat32.h module_version.h; do \
/usr/bin/install -c -m 644 $hdr
/usr/local/include/linux/oracleasm/$hdr; \
done
make[3]: Leaving directory
`/root/rpms/source/oracleasm-2.0.4/include/linux/oracleasm'
make[2]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include/linux'
make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/include'
make -C kernel install
make[1]: Entering directory `/root/rpms/source/oracleasm-2.0.4/kernel'
make[1]: *** No rule to make target `oracleasm.ko', needed by
`install-oracleasm'. Stop.
make[1]: Leaving directory `/root/rpms/source/oracleasm-2.0.4/kernel'
make: *** [kernel-install] Error 2
Tina Soles
Senior Analyst
STR Software
11505 Allecingie Parkway
Richmond, VA 23235
email. tina.soles at strsoftware.com
phone. 804.897.1600
fax. 804.897.1638
web. www.strsoftware.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.gif
Type: image/gif
Size: 3308 bytes
Desc: not available
URL:
From theophanis_kontogiannis at yahoo.gr Wed Jul 2 10:20:43 2008
From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis)
Date: Wed, 2 Jul 2008 13:20:43 +0300
Subject: [Linux-cluster] Problem with GFS2 - Kernel Panic - Can NOT erase
directory
In-Reply-To: <008201c8dac0$e0aae010$a200a030$@gr>
References: <008201c8dac0$e0aae010$a200a030$@gr>
Message-ID: <003401c8dc2d$4a7c0560$df741020$@gr>
Hello again,
Becoming curious why only one service fails, I tried to narrow down the
root cause.
It turns out that only the files in one directory (where the failing
service keeps its files) are corrupted.
Trying to ls -l in the directory gives the following output:
ls: reading directory .: Input/output error
total 192
?--------- ? ? ? ? ?
account_boinc.bakerlab.org_rosetta.xml
?--------- ? ? ? ? ?
account_climateprediction.net.xml
?--------- ? ? ? ? ?
account_predictor.chem.lsa.umich.edu.xml
?--------- ? ? ? ? ? all_projects_list.xml
-rw-r--r-- 1 boinc boinc 159796 Jun 22 22:47 client_state_prev.xml
?--------- ? ? ? ? ? client_state.xml
-rw-r--r-- 1 boinc boinc 5141 Jun 13 23:21 get_current_version.xml
?--------- ? ? ? ? ? get_project_config.xml
-rw-r--r-- 1 boinc boinc 899 Apr 4 17:06 global_prefs.xml
?--------- ? ? ? ? ? gui_rpc_auth.cfg
?--------- ? ? ? ? ?
job_log_boinc.bakerlab.org_rosetta.txt
?--------- ? ? ? ? ?
job_log_predictor.chem.lsa.umich.edu.txt
?--------- ? ? ? ? ? lockfile
?--------- ? ? ? ? ? lookup_account.xml
?--------- ? ? ? ? ? lookup_website.html
?--------- ? ? ? ? ?
master_boinc.bakerlab.org_rosetta.xml
?--------- ? ? ? ? ?
master_climateprediction.net.xml
?--------- ? ? ? ? ?
master_predictor.chem.lsa.umich.edu.xml
?--------- ? ? ? ? ? projects
?--------- ? ? ? ? ?
sched_reply_boinc.bakerlab.org_rosetta.xml
?--------- ? ? ? ? ?
sched_reply_climateprediction.net.xml
?--------- ? ? ? ? ?
sched_reply_predictor.chem.lsa.umich.edu.xml
?--------- ? ? ? ? ?
sched_request_boinc.bakerlab.org_rosetta.xml
-rw-r--r-- 1 boinc boinc 6766 Jun 22 21:27
sched_request_climateprediction.net.xml
?--------- ? ? ? ? ?
sched_request_predictor.chem.lsa.umich.edu.xml
?--------- ? ? ? ? ? slots
?--------- ? ? ? ? ?
statistics_boinc.bakerlab.org_rosetta.xml
?--------- ? ? ? ? ?
statistics_climateprediction.net.xml
?--------- ? ? ? ? ?
statistics_predictor.chem.lsa.umich.edu.xml
?--------- ? ? ? ? ? stderrdae.txt
?--------- ? ? ? ? ? stdoutdae.txt
?--------- ? ? ? ? ? time_stats_log
At the same moment the kernel reports what follows below (see the output
attached to my previous e-mail).
Trying to rm -rf the directory fails with the same kernel message.
Any ideas on how to remove the problematic directory?
Also, the other node (the one on which I am not performing any actions on
the file system in question) gives the following message:
GFS2: fsid=tweety:gfs2-00.0: jid=1: Trying to acquire journal lock...
GFS2: fsid=tweety:gfs2-00.0: jid=1: Busy
And the file system becomes inaccessible forever. Does anyone know why
that is?
Thank you all for your time
T. Kontogiannis
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Theophanis
Kontogiannis
Sent: Monday, June 30, 2008 5:52 PM
To: 'linux clustering'
Subject: [Linux-cluster] Problem with GFS2 - Kernel Panic
Hello all,
I have a two node cluster with DRBD running in Primary/Primary.
Both nodes are running:
- Kernel 2.6.18-92.1.6.el5.centos.plus
- GFS2 fsck 0.1.44
- cman_tool 2.0.84
- Cluster LVM daemon version: 2.02.32-RHEL5 (2008-03-04),
  protocol version: 0.2.1
- DRBD Version: 8.2.6 (api:88)
After a corruption (which was the result of a combination of updating and
rebooting with the FS mounted, a network interruption during the reboot,
and similar issues), I keep getting the following on one node:
Jun 30 00:13:40 tweety1 clurgmgrd[5283]: stop on script "BOINC"
returned 1 (generic error)
Jun 30 00:13:40 tweety1 clurgmgrd[5283]: Services Initialized
Jun 30 00:13:40 tweety1 clurgmgrd[5283]: State change: Local UP
Jun 30 00:13:45 tweety1 clurgmgrd[5283]: Starting stopped service
service:BOINC-t1
Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: fatal: invalid
metadata block
Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: bh = 21879736
(magic number)
Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: function =
gfs2_meta_indirect_buffer, file = fs/gfs2/meta_io.c, line = 332
Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: about to
withdraw this file system
Jun 30 00:13:45 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: telling LM to
withdraw
Jun 30 00:13:46 tweety1 clurgmgrd[5283]: Service service:BOINC-t1
started
Jun 30 00:13:46 tweety1 kernel: GFS2: fsid=tweety:gfs2-00.0: withdrawn
Jun 30 00:13:46 tweety1 kernel:
Jun 30 00:13:46 tweety1 kernel: Call Trace:
Jun 30 00:13:46 tweety1 kernel: []
:gfs2:gfs2_lm_withdraw+0xc1/0xd0
Jun 30 00:13:46 tweety1 kernel: []
__wait_on_bit+0x60/0x6e
Jun 30 00:13:46 tweety1 kernel: [] sync_buffer+0x0/0x3f
Jun 30 00:13:46 tweety1 kernel: []
out_of_line_wait_on_bit+0x6c/0x78
Jun 30 00:13:46 tweety1 kernel: []
wake_bit_function+0x0/0x23
Jun 30 00:13:46 tweety1 kernel: []
:gfs2:gfs2_meta_check_ii+0x2c/0x38
Jun 30 00:13:46 tweety1 kernel: []
:gfs2:gfs2_meta_indirect_buffer+0x104/0x15e
Jun 30 00:13:46 tweety1 kernel: []
:gfs2:gfs2_inode_refresh+0x22/0x2ca
Jun 30 00:13:46 tweety1 kernel: []
wake_bit_function+0x0/0x23
Jun 30 00:13:46 tweety1 kernel: []
:gfs2:inode_go_lock+0x29/0x57
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:glock_wait_internal+0x1d4/0x23f
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:gfs2_glock_nq+0x1ae/0x1d4
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:gfs2_lookup+0x58/0xa7
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:gfs2_lookup+0x50/0xa7
Jun 30 00:13:47 tweety1 kernel: [] d_alloc+0x174/0x1a9
Jun 30 00:13:47 tweety1 kernel: [] do_lookup+0xd3/0x1d4
Jun 30 00:13:47 tweety1 kernel: []
__link_path_walk+0xa01/0xf42
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:compare_dents+0x0/0x57
Jun 30 00:13:47 tweety1 kernel: []
link_path_walk+0x5c/0xe5
Jun 30 00:13:47 tweety1 kernel: []
:gfs2:gfs2_glock_put+0x26/0x133
After that, the machine freezes completely. The only way to recover is to
power-cycle / reset.
"gfs2-fsck -vy /dev/mapper/vg0-data0" ends (not terminates, it just look
like it finishes) with:
Pass5 complete
Writing changes to disk
gfs2_fsck: buffer still held for block: 21875415 (0x14dcad7)
After remounting the file system and having a service start (one that has
its files on this gfs2 filesystem), the kernel again crashes with the same
message and the node freezes up.
Unfortunately, due to bad handling, I failed to DRBD-invalidate the
problematic node and make it the sync target (which theoretically would
have solved the problem, since the good node would then have synced the
bad node). Instead I made the bad node the sync source, and now both nodes
have the same issue :-(
Any ideas of how can I resolve this issue?
Sincerely,
Theophanis Kontogiannis
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From gspiegl at gmx.at Wed Jul 2 13:51:05 2008
From: gspiegl at gmx.at (Gerhard Spiegl)
Date: Wed, 02 Jul 2008 15:51:05 +0200
Subject: [Linux-cluster] takeover, fencing & failback
In-Reply-To: <486A426B.20807@redhat.com>
References: <486A0DE1.6010609@gmx.at> <486A426B.20807@redhat.com>
Message-ID: <486B87C9.8040909@gmx.at>
John Ruemker wrote:
> Gerhard Spiegl wrote:
>> Hi all,
>>
>> I'm working on a two node cluster (RHEL 5.2 + RHCS) with one
>> XEN virtual machine per node:
>>
>> node1 => VM1
>> node2 => VM2
>>
>> When node1 takes over VM2 via the command:
>>
>> clusvcadm -M vm:VM2 -m node1
>>
>> node2 gets fenced after takeover is done, which is probably expected
>> behaviour.
>>
>
> This is not expected. The vm should migrate and both nodes should
> continue running.
>
>> As node2 comes up again it fetches his VM2 back (nofailback="0", but also
>> fences node1 (ipmilan) where VM1 is still running an therefore
>> interrupted and restartet on node2.
>> When node1 comes up the same game in the other direction begins.
>> Is there a way to avoid this fence loop?
>>
>> In other words: can a service be migrated from node1 to node2 without
>> other
>> services that run on node1 being interrupted?
>>
>
> Are both nodes successfully joined in the cluster? What does 'cman_tool
> nodes' say? Can you attach logs showing all of this happening?
>
Hi,
cman_tool nodes before and during the migration:
[root at ols011p ~]# cman_tool nodes
Node Sts Inc Joined Name
0 M 0 2008-07-02 12:50:31 /dev/mapper/HDS-00F9p2
1 M 1228 2008-07-02 12:50:19 ols011p.ops.ctbto.org
2 M 1232 2008-07-02 12:50:19 ols012p.ops.ctbto.org
[root at ols012p ~]# cman_tool nodes
Node Sts Inc Joined Name
0 M 0 2008-07-02 12:50:32 /dev/mapper/HDS-00F9p2
1 M 1232 2008-07-02 12:50:19 ols011p.ops.ctbto.org
2 M 1224 2008-07-02 12:49:51 ols012p.ops.ctbto.org
everything seems fine.
The logs are attached as separate files.
thanks
Gerhard
>
> John
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: OLS011_log
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: OLS012_log
URL:
From jbrassow at redhat.com Wed Jul 2 14:51:43 2008
From: jbrassow at redhat.com (Jonathan Brassow)
Date: Wed, 2 Jul 2008 09:51:43 -0500
Subject: [Linux-cluster] Cluster doesn't come up while rebooting
In-Reply-To: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com>
References: <56bb44d0807010206y220c2947rbb71a656d38b1afa@mail.gmail.com>
Message-ID:
I wouldn't worry about the "Magma Event: Membership Change" messages.
I think that gets printed out whenever a machine joins or leaves the
cluster. (You have to be part of the cluster to see the changes...
which is why everyone sees local change first, followed by whoever
comes after them.) Do you have syslog set to print out 'debug'? That
may explain some of these messages...
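A quick way to check (default syslog path on RHEL4; adjust if yours
differs):

grep -v '^#' /etc/syslog.conf        # look for *.debug or similar catch-all entries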
Just to get this straight, after all machines are up, if you use
'clusvcadm' to start the services, it works? If you reboot all
machines, it doesn't work on bootup? What if you just reboot one
machine?
Someone will have to confirm my next few statements, but this is what
I think is happening... rgmanager does a 'stop' when a machine comes
up. I'm guessing this is why you are seeing the "is not mounted" and
other messages. In your cluster.conf, you have the services set to
'autostart="0"', which means they will not start by default(?). So,
you need to start by hand when the machines come up. Potential
solution is to ignore the messages you've attached (or figure out why
syslog is being so verbose), and take out the 'autostart="0"' from
cluster.conf.
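Something along these lines should do it (default paths; the service name
is whatever yours is called):

grep autostart /etc/cluster/cluster.conf      # look for autostart="0" on the services
# edit cluster.conf, set autostart="1" (or drop the attribute), bump config_version, then:
ccs_tool update /etc/cluster/cluster.conf     # push the new config to the other members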
brassow
On Jul 1, 2008, at 4:06 AM, Stevan Colaco wrote:
> Hello All,
>
> I need your help for one issue i am facing .
>
> OS: RHEL4 ES Update 6 64bit
>
> I have a deployment where we have 2 + 1 cluster (2 active and one
> passive). I have a service which is to be failed over but faced issues
> when i rebooted all 3 servers. Services got disabled. But when i use
> clusvsadm to manually enable service it works. Here are the logs : -
>
> Jun 25 11:13:15 mb1 clurgmgrd[14825]: Resource Group Manager Starting
> Jun 25 11:13:15 mb1 clurgmgrd[14825]: Loading Service Data
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: Initializing Services
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: /dev/sdh1 is not mounted
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-BACKUP with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-BACKUP returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-STORE with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-STORE returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-DBDATA with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-DBDATA returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-CONF with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CONF returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-REDOLOG with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-REDOLOG returned
> 2 (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-INDEX with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-INDEX returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-LOG with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-LOG returned 2
> (invalid argument(s))
> Jun 25 11:13:17 mb1 clurgmgrd: [14825]: stop: Could not match
> LABEL=MB2-ZIMBRA-CLUST with a real device
> Jun 25 11:13:17 mb1 clurgmgrd[14825]: stop on fs:MB2-CLUSTER returned
> 2 (invalid argument(s))
> Jun 25 11:13:22 mb1 clurgmgrd: [14825]: /dev/sdg1 is not mounted
> Jun 25 11:13:27 mb1 clurgmgrd: [14825]: /dev/sdf1 is not mounted
> Jun 25 11:13:33 mb1 clurgmgrd: [14825]: /dev/sde1 is not mounted
> Jun 25 11:13:38 mb1 clurgmgrd: [14825]: /dev/sdd1 is not mounted
> Jun 25 11:13:43 mb1 clurgmgrd: [14825]: /dev/sdc1 is not mounted
> Jun 25 11:13:45 mb1 rgmanager: clurgmgrd startup failed
> Jun 25 11:13:48 mb1 clurgmgrd: [14825]: /dev/sdb1 is not mounted
> Jun 25 11:13:53 mb1 clurgmgrd: [14825]: /dev/sda1 is not mounted
> Jun 25 11:13:58 mb1 clurgmgrd[14825]: Services Initialized
> Jun 25 11:14:01 mb1 clurgmgrd[14825]: Logged in SG "usrm::manager"
> Jun 25 11:14:01 mb1 clurgmgrd[14825]: Magma Event: Membership Change
> Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change: Local UP
> Jun 25 11:14:01 mb1 clurgmgrd[14825]: State change:
> mbstandby.ku.edu.kw UP
> Jun 25 11:14:03 mb1 clurgmgrd[14825]: Magma Event: Membership Change
> Jun 25 11:14:03 mb1 clurgmgrd[14825]: State change: mb2.ku.edu.kw UP
>
>
> MB2 server Logs
>
> Jun 25 11:13:40 mb2 clurgmgrd[14776]: Resource Group Manager Starting
> Jun 25 11:13:40 mb2 clurgmgrd[14776]: Loading Service Data
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: Initializing Services
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-DBDATA with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-DBDATA returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-INDEX with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-INDEX returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-LOG with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-LOG returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-CONF with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CONF returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: /dev/sdh1 is not mounted
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-BACKUP with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-BACKUP returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-REDOLOG with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-REDOLOG returned
> 2 (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-STORE with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-STORE returned 2
> (invalid argument(s))
> Jun 25 11:13:41 mb2 clurgmgrd: [14776]: stop: Could not match
> LABEL=MB1-ZIMBRA-CLUST with a real device
> Jun 25 11:13:41 mb2 clurgmgrd[14776]: stop on fs:MB1-CLUSTER returned
> 2 (invalid argument(s))
> Jun 25 11:13:46 mb2 clurgmgrd: [14776]: /dev/sdf1 is not mounted
> Jun 25 11:13:52 mb2 clurgmgrd: [14776]: /dev/sdg1 is not mounted
> Jun 25 11:13:57 mb2 clurgmgrd: [14776]: /dev/sde1 is not mounted
> Jun 25 11:14:02 mb2 clurgmgrd: [14776]: /dev/sdd1 is not mounted
> Jun 25 11:14:07 mb2 clurgmgrd: [14776]: /dev/sdc1 is not mounted
> Jun 25 11:14:10 mb2 rgmanager: clurgmgrd startup failed
> Jun 25 11:14:12 mb2 clurgmgrd: [14776]: /dev/sdb1 is not mounted
> Jun 25 11:14:18 mb2 clurgmgrd: [14776]: /dev/sda1 is not mounted
> Jun 25 11:14:23 mb2 clurgmgrd[14776]: Services Initialized
> Jun 25 11:14:25 mb2 clurgmgrd[14776]: Logged in SG "usrm::manager"
> Jun 25 11:14:25 mb2 clurgmgrd[14776]: Magma Event: Membership Change
> Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: Local UP
> Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change: mb1.ku.edu.kw UP
> Jun 25 11:14:25 mb2 clurgmgrd[14776]: State change:
> mbstandby.ku.edu.kw UP
>
> MBSTANDBY LOGS
>
> Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Resource Group Manager
> Starting
> Jun 25 11:13:26 mbstandby clurgmgrd[15850]: Loading Service Data
> Jun 25 11:13:27 mbstandby clurgmgrd[15850]: Initializing Services
> Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdl1 is not mounted
> Jun 25 11:13:27 mbstandby clurgmgrd: [15850]: /dev/sdp1 is not mounted
> Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdk1 is not mounted
> Jun 25 11:13:32 mbstandby clurgmgrd: [15850]: /dev/sdn1 is not mounted
> Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdj1 is not mounted
> Jun 25 11:13:38 mbstandby clurgmgrd: [15850]: /dev/sdo1 is not mounted
> Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdi1 is not mounted
> Jun 25 11:13:43 mbstandby clurgmgrd: [15850]: /dev/sdm1 is not mounted
> Jun 25 11:13:47 mbstandby sshd(pam_unix)[17583]: session opened for
> user root by (uid=0)
> Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdd1 is not mounted
> Jun 25 11:13:48 mbstandby clurgmgrd: [15850]: /dev/sdh1 is not mounted
> Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdg1 is not mounted
> Jun 25 11:13:53 mbstandby clurgmgrd: [15850]: /dev/sdc1 is not mounted
> Jun 25 11:13:56 mbstandby rgmanager: clurgmgrd startup failed
> Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session opened for user
> zimbra by (uid=0)
> Jun 25 11:13:56 mbstandby zimbra: -bash: /opt/zimbra/log/startup.log:
> No such file or directory
> Jun 25 11:13:56 mbstandby su(pam_unix)[18378]: session closed for
> user zimbra
> Jun 25 11:13:56 mbstandby rc: Starting zimbra: failed
> Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdf1 is not mounted
> Jun 25 11:13:58 mbstandby clurgmgrd: [15850]: /dev/sdb1 is not mounted
> Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sde1 is not mounted
> Jun 25 11:14:04 mbstandby clurgmgrd: [15850]: /dev/sda1 is not mounted
> Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Services Initialized
> Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Logged in SG
> "usrm::manager"
> Jun 25 11:14:09 mbstandby clurgmgrd[15850]: Magma Event: Membership
> Change
> Jun 25 11:14:09 mbstandby clurgmgrd[15850]: State change: Local UP
> Jun 25 11:14:12 mbstandby clurgmgrd[15850]: Magma Event: Membership
> Change
> Jun 25 11:14:12 mbstandby clurgmgrd[15850]: State change:
> mb1.ku.edu.kw UP
> Jun 25 11:14:13 mbstandby clurgmgrd[15850]: Resource groups locked;
> not evaluating
> Jun 25 11:14:14 mbstandby clurgmgrd[15850]: Magma Event: Membership
> Change
> Jun 25 11:14:14 mbstandby clurgmgrd[15850]: State change:
> mb2.ku.edu.kw UP
> Jun 25 11:49:22 mbstandby sshd(pam_unix)[9438]: session opened for
> user root by (uid=0)
>
> I am using e2label (filesystem labels) for mounting on the failover as well as the primary server.
> Attached also is my cluster.conf.
>
> Right now fencing is not being used properly; I am just using manual fencing and was
> doing testing with HP iLO fencing.
>
> 1st query I have is: why does it show "Magma Event: Membership
> Change"?
>
> Since I have initially defined 3 members in the cluster, it should not
> give me this. Is it because some package is missing, or do I have to run
> up2date?
>
> I have installed following packages : -
>
> ccs-1.0.11-1.x86_64.rpm
> cman-kernheaders-2.6.9-53.5.x86_64.rpm gulm-1.0.10-0.x86_64.rpm
> magma-plugins-1.0.12-0.x86_64.rpm
> ccs-devel-1.0.11-1.x86_64.rpm dlm-1.0.7-1.x86_64.rpm
> gulm-devel-1.0.10-0.x86_64.rpm
> perl-Net-Telnet-3.03-3.noarch.rpm
> cman-1.0.17-0.x86_64.rpm dlm-devel-1.0.7-1.x86_64.rpm
> iddev-2.0.0-4.x86_64.rpm rgmanager-1.9.72-1.x86_64.rpm
> cman-devel-1.0.17-0.x86_64.rpm
> dlm-kernel-2.6.9-52.2.x86_64.rpm iddev-devel-2.0.0-4.x86_64.rpm
> system-config-cluster-1.0.51-2.0.noarch.rpm
> cman-kernel-2.6.9-53.5.x86_64.rpm
> dlm-kernel-smp-2.6.9-52.2.x86_64.rpm luci-0.11.0-3.x86_64.rpm
> cman-kernel-smp-2.6.9-53.5.x86_64.rpm fence-1.32.50-2.x86_64.rpm
> magma-1.0.8-1.x86_64.rpm
>
> Am I missing any other important package for the cluster? I
> installed the packages using rpm -ivh *.rpm.
> Also, I stopped the lock_gulmd service as I am using the lock_dlm lock manager.
>
> Later I tried using just an IP in the service, without mount points and
> the application service. Even then it does not start up on reboot. Here are
> the logs:
>
> Jun 27 19:44:37 mb1 clurgmgrd[12737]: Resource Group
> Manager Starting
> Jun 27 19:44:37 mb1 clurgmgrd[12737]: Loading Service Data
> Jun 27 19:44:37 mb1 fstab-sync[12738]: removed all generated mount
> points
> Jun 27 19:44:38 mb1 clurgmgrd[12737]: Initializing Services
> Jun 27 19:44:38 mb1 clurgmgrd[12737]: Services Initialized
> Jun 27 19:44:38 mb1 clurgmgrd[12737]: Logged in SG
> "usrm::manager"
> Jun 27 19:44:38 mb1 clurgmgrd[12737]: Magma Event: Membership
> Change
> Jun 27 19:44:38 mb1 clurgmgrd[12737]: State change: Local UP
> Jun 27 19:44:38 mb1 rgmanager: clurgmgrd startup succeeded
> Jun 27 19:44:41 mb1 clurgmgrd[12737]: Magma Event: Membership
> Change
> Jun 27 19:44:41 mb1 clurgmgrd[12737]: State change:
> mbstandby.ku.edu.kw UP
> Jun 27 19:44:43 mb1 clurgmgrd[12737]: Magma Event: Membership
> Change
> Jun 27 19:44:43 mb1 clurgmgrd[12737]: State change:
> mb2.ku.edu.kw UP
>
> Attached is also cluster.conf for this
>
> Please advise what the issue could be. Thanks in advance.
>
> Regards,
> -Steven
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From lhh at redhat.com Wed Jul 2 18:29:08 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:29:08 -0400
Subject: [Linux-cluster] IP resource behavior
In-Reply-To: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com>
References: <779919740807011157qec9f5a9m965523ef4ebe5631@mail.gmail.com>
Message-ID: <1215023348.23062.6.camel@localhost.localdomain>
On Tue, 2008-07-01 at 12:57 -0600, Jeremy Lyon wrote:
> Hi,
>
> We noticed today that if we manually remove an IP via 'ip a del <IP>/32
> dev bond0', the service does not detect this and does not cause a
> fail over. Shouldn't the service be statusing the IP resource to make
> sure it is configured and up? We do have the monitor link option
> enabled. This is cluster 2 on RHEL 5.1
Yes, it should have detected it. However, there's a bug in the stable2
branch which could cause it to fail in your case, particularly if your
IP ends in, say, .25.
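As a rough way to reproduce/verify the behaviour (the address below is just a
placeholder for your service IP; exact recovery depends on your recovery policy):
  ip addr del 10.1.2.25/32 dev bond0   # simulate the address disappearing
  clustat                              # after the next status check the service
                                       # should be marked failed and recovered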
-- Lon
From lhh at redhat.com Wed Jul 2 18:34:32 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:34:32 -0400
Subject: [Linux-cluster] takeover, fencing & failback
In-Reply-To: <486A0DE1.6010609@gmx.at>
References: <486A0DE1.6010609@gmx.at>
Message-ID: <1215023672.23062.10.camel@localhost.localdomain>
On Tue, 2008-07-01 at 12:58 +0200, Gerhard Spiegl wrote:
> Hi all,
>
> I'm working on a two node cluster (RHEL 5.2 + RHCS) with one
> XEN virtual machine per node:
>
> node1 => VM1
> node2 => VM2
>
> When node1 takes over VM2 via the command:
>
> clusvcadm -M vm:VM2 -m node1
>
> node2 gets fenced after takeover is done, which is probably expected behaviour.
No, it's not.
> As node2 comes up again it fetches its VM2 back (nofailback="0"), but it also
> fences node1 (ipmilan), where VM1 is still running and is therefore interrupted and
> restarted on node2.
Neither is this. Fetching the VM back certainly shouldn't require
fencing...
> When node1 comes up the same game in the other direction begins.
> Is there a way to avoid this fence loop?
> In other words: can a service be migrated from node1 to node2 without other
> services that run on node1 being interrupted?
We'll need more details in order to figure out what's going on; such as
cluster.conf and your network topology (switch make/model, what speed
are your network links, etc)
-- Lon
From lhh at redhat.com Wed Jul 2 18:36:28 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:36:28 -0400
Subject: [Linux-cluster] CS5 / IP failover with bond interface ?
In-Reply-To: <486354AB.4050307@bull.net>
References: <486354AB.4050307@bull.net>
Message-ID: <1215023788.23062.13.camel@localhost.localdomain>
On Thu, 2008-06-26 at 10:34 +0200, Alain Moulle wrote:
> Hi
>
> Is it supported to use IP bonded adress as IP to
> be failovered via the CS5 ?
It should be, but you must have a bonded address configured first - we
do not manage setting up/taking down bonded interfaces. Rgmanager
should assign IP addresses to "bondX" when appropriate.
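Roughly, as a sketch (addresses are placeholders; configure bonding however you
normally do - modprobe.conf options plus ifcfg-bond0):
  # ifcfg-bond0 carries the node's own static address
  DEVICE=bond0
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=192.168.1.11
  NETMASK=255.255.255.0
  # cluster.conf then only carries the floating service address:
  <ip address="192.168.1.50" monitor_link="1"/>
rgmanager picks the interface whose subnet matches the floating address, so it
should end up on bond0.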
-- Lon
From lhh at redhat.com Wed Jul 2 18:40:50 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:40:50 -0400
Subject: [Linux-cluster] Re: CS5 / quorum disk and heuristics / about
allow_kill and/or reboot
In-Reply-To: <4863559E.9030200@bull.net>
References: <4863559E.9030200@bull.net>
Message-ID: <1215024050.23062.17.camel@localhost.localdomain>
On Thu, 2008-06-26 at 10:38 +0200, Alain Moulle wrote:
> Hi Lon
>
> and so ... ? ;-)
Right. Heartbeat fails + allow_kill = 0 -> qdiskd doesn't help prevent
fence race.
reboot = 0 shouldn't matter because the node which has a correct
heuristic score will win.
-- Lon
>
> Regards
> Alain Moullé
>
>
> Date: Tue, 10 Jun 2008 14:37:19 -0400
> From: Lon Hohberger
> >>Hi Lon,
> >>> Whereas heart-beat interface was working fine.
> >>> You can disable these by setting allow_kill="0" and/or reboot="0"
> >>> (see qdisk(5)).
> >>
> >>
> >> => ok but in the case of a heart-beat failure, it will no more
> >> avoid the dual-fencing in a two-nodes cluster if allow_kill="0" and/or
> reboot="0" , right ?
>
> >I'd have to think about it.
> >Lon
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From lhh at redhat.com Wed Jul 2 18:42:12 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:42:12 -0400
Subject: [Linux-cluster] virtual machine failover with gfs
In-Reply-To:
References:
Message-ID: <1215024132.23062.20.camel@localhost.localdomain>
On Wed, 2008-06-25 at 15:51 -0700, matt whiteley wrote:
> I have spent lots of hours trying different setups and reading the
> documentation already so I hope this isn't a faq as I am new to the
> list.
>
> I read the Red Hat Magazine article on this topic[1], but have come to
> realize that it might not be exactly what I am going for. I want to
> have a group of nodes that run a group of virtual machines with
> automated failover. I set things up how the article described but
> realized I didn't want the gfs mount in the fstab file. I would like
> the gfs mount described in the cluster.conf file so that as nodes are
> added or removed the mount will follow the changes (I know about the 1
> journal per node so have created a few extra already). When I add a
> service to mount the gfs resource, it only gets mounted on one node as
> is to be expected thinking in terms of other resources.
> I started thinking about this and it almost seems like gfs is
> unnecessary. Should I have a file system per virtual machine that
> wouldn't need to be gfs since only one node will ever run a virtual
> machine at a time? Then mount/umount the file system as the virtual
> machine was migrated in the cluster?
If you assign a raw SAN LUN to each virtual machine, you don't need GFS.
I would not bother making an EXT3 or other local file system and placing
a single VM image on it; it's not terribly practical.
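For example, in the guest's Xen config you can point the virtual disk straight
at the LUN instead of at an image file (the device name is only a placeholder):
  disk = [ 'phy:/dev/mapper/vm1-lun,xvda,w' ]
The vm resource in cluster.conf then just starts/migrates the guest; no file
system resource is needed for its image.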
-- Lon
>
> It seems like I am missing something about how this should be setup
> and I would really appreciate any tips or ideas. I will include my
> cluster.conf in case it provides any more info.
>
> As a side note, what is with all the errors from system-config-
> kickstart telling me my config file is invalid if it was generated by
> conga. Both versions are updated to the newest available.
>
>
>
> [1] http://www.redhatmagazine.com/2007/08/23/automated-failover-and-recovery-of-virtualized-guests-in-advanced-platform/
>
> thanks,
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From lhh at redhat.com Wed Jul 2 18:43:47 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:43:47 -0400
Subject: [Linux-cluster] Lost token - every 5 minutes: [TOTEM] The
token was lost. Samba process possible cause?
In-Reply-To: <6008E5CED89FD44A86D3C376519E1DB2010347BB42@megatron.ms.a2end.com>
References: <6008E5CED89FD44A86D3C376519E1DB2010347BB42@megatron.ms.a2end.com>
Message-ID: <1215024227.23062.23.camel@localhost.localdomain>
Hi,
This sounds like something that someone on the openais list would know. I've
CC'd the openais list.
-- Lon
On Fri, 2008-06-27 at 16:03 +1000, Bevan Broun wrote:
> Hi All
>
> I have a 2 node RHEL-5.1 cluster. A quorum disk is configured.
> The hosts have 4 NICs. These are bonded:
> (eth0+eth2) -> bond0
> (eth1+eth3) -> bond1
> Unfortunately I was not able to use a dedicated interface for cluster communications - bond1 is being used. This is where I think I'm in trouble.
>
> The cluster has been configured using IP addressess. I did have to use http://archives.free.net.ph/message/20080130.074958.5c7a211c.en.html
> as the hostname is related to the bond0 IP.
>
> I have not defined the interface to be used by the cluster, just relying on the IP address configured.
> The cluster's purpose is 2 GFS file systems.
>
> The cluster was configured and working for 4 days before there were problems.
>
> I now have almost constant 'lost token' messages in /var/log/messages. They are almost exactly 5 minutes apart. A typical bit of the messages file is shown below my sig.
>
> Just before the problem started, a Samba message shows nmbd becoming local master browser for a workgroup on the interface used for cluster communications.
>
> Jun 20 13:39:27 HOST1 nmbd[24506]: [2008/06/20 13:39:27, 0] nmbd/nmbd_become_lmb.c:become_loca
> l_master_stage2(396)
> Jun 20 13:39:27 HOST1 nmbd[24506]: *****
> Jun 20 13:39:27 HOST1 nmbd[24506]:
> Jun 20 13:39:27 HOST1 nmbd[24506]: Samba name server NBM1 is now a local master browser for
> workgroup SMS_DOMAIN on subnet 162.16.96.229
> Jun 20 13:39:27 HOST1 nmbd[24506]:
> Jun 20 13:39:27 HOST1 nmbd[24506]: *****
> Jun 20 13:43:27 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state.
>
> "cman_tool status" shows both nodes and looks normal. Looks like clmvd is not happy, df commands are hanging.
>
> Could nmbd be causing this token loss? Any ideas on how to proceed?
>
> (names and IPs have been changed).
>
> Thanks
>
> Bevan Broun
> Solutions Architect
> Ardec International
> http://www.ardec.com.au
> http://www.lisasoft.com
> http://www.terrapages.com
> Sydney
> -----------------------
> Suite 112,The Lower Deck
> 19-21 Jones Bay Wharf
> Pirrama Road, Pyrmont 2009
> Ph: +61 2 8570 5000
> Fax: +61 2 8570 5099
>
>
>
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state.
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Receive multicast socket recv buffer size (28800
> 0 bytes).
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Transmit multicast socket send buffer size (2621
> 42 bytes).
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] entering GATHER state from 2.
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep.
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Saving state aru 16 high seq received 16
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce34
> Jun 20 13:48:31 HOST1 openais[15265]: [TOTEM] entering COMMIT state.
> Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state.
> Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4.
> Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep.
> Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce38
> Jun 20 13:48:41 HOST1 openais[15265]: [TOTEM] entering COMMIT state.
> Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state.
> Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4.
> Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep.
> Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce3c
> Jun 20 13:48:51 HOST1 openais[15265]: [TOTEM] entering COMMIT state.
> Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] The token was lost in the COMMIT state.
> Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] entering GATHER state from 4.
> Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] Creating commit token because I am the rep.
> Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] Storing new sequence id for ring 20ce40
> Jun 20 13:49:01 HOST1 openais[15265]: [TOTEM] entering COMMIT state.
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] entering RECOVERY state.
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] position [0] member 162.16.96.229:
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] previous ring seq 2149936 rep 162.16.96.229
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] aru 16 high delivered 16 received flag 1
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] position [1] member 162.16.96.230:
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] previous ring seq 2149936 rep 162.16.96.229
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] aru 16 high delivered 16 received flag 1
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] Did not need to originate any messages in recove
> ry.
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] Sending initial ORF token
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] CLM CONFIGURATION CHANGE
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] New Configuration:
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.229)
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.230)
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Left:
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Joined:
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] CLM CONFIGURATION CHANGE
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] New Configuration:
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.229)
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] r(0) ip(162.16.96.230)
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Left:
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] Members Joined:
> Jun 20 13:49:06 HOST1 openais[15265]: [SYNC ] This node is within the primary component and wi
> ll provide service.
> Jun 20 13:49:06 HOST1 openais[15265]: [TOTEM] entering OPERATIONAL state.
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] got nodejoin message 162.16.96.229
> Jun 20 13:49:06 HOST1 openais[15265]: [CLM ] got nodejoin message 162.16.96.230
> Jun 20 13:49:06 HOST1 openais[15265]: [CPG ] got joinlist message from node 2
> Jun 20 13:49:06 HOST1 openais[15265]: [CPG ] got joinlist message from node 1
> Jun 20 13:53:38 HOST1 openais[15265]: [TOTEM] The token was lost in the OPERATIONAL state.
>
> The contents of this email are confidential and may be subject to legal or professional privilege and copyright. No representation is made that this email is free of viruses or other defects. If you have received this communication in error, you may not copy or distribute any part of it or otherwise disclose its contents to anyone. Please advise the sender of your incorrect receipt of this correspondence.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From lhh at redhat.com Wed Jul 2 18:48:22 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Wed, 02 Jul 2008 14:48:22 -0400
Subject: [Linux-cluster] Availability and Working of Service
In-Reply-To: <55459.36160.qm@web45809.mail.sp1.yahoo.com>
References: <55459.36160.qm@web45809.mail.sp1.yahoo.com>
Message-ID: <1215024502.23062.29.camel@localhost.localdomain>
On Fri, 2008-06-27 at 03:01 -0700, Mshehzad Pankhawala wrote:
> Hello Every one,
>
> I am planning to configure Asterisk Cluster using some of the
> clustering technologies (LVS or OpenSER or Heartbeat or any other
> thing).
>
> My problem is that Heartbeat and other components just check the
> availability of the server which is to be clustered. But I also want
> the Asterisk service itself to be checked: whether the server is answering
> calls properly, whether all the functionality of the Asterisk server is working
> properly, and whether other services such as the voice mail server (which is used by
> the Asterisk server) are running properly.
>
> If anybody can guide me on how to do that, or knows of any components or tools
> available, or any Asterisk-specific tool to check Asterisk services,
> etc., then please reply.
Cluster resource managers (one is included as part of heartbeat) can
certainly perform any check you can write a script for.
I would expect you to start Asterisk with a script; e.g.:
/etc/init.d/asterisk start
That script probably also has stop and status actions:
/etc/init.d/asterisk stop
/etc/init.d/asterisk status
The 'status' action can be used by heartbeat / rgmanager / etc. to check
the health of the asterisk server.
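A minimal sketch of the status part (the real init script will differ, and you
can make the check as deep as you like - e.g. place a test call instead of only
checking the process):
  #!/bin/sh
  # /etc/init.d/asterisk -- status action only
  case "$1" in
    status)
      # exit 0 when healthy; rgmanager/heartbeat treat a non-zero exit
      # from 'status' as a failed service and trigger recovery
      pidof asterisk >/dev/null 2>&1
      exit $?
      ;;
  esac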
-- Lon
From ssingh at amnh.org Wed Jul 2 18:58:12 2008
From: ssingh at amnh.org (Sajesh Singh)
Date: Wed, 02 Jul 2008 14:58:12 -0400
Subject: [Linux-cluster] Multipathing, CLVM and GFS
Message-ID: <486BCFC4.9030203@amnh.org>
Centos 4.6
Cluster Suite
I am currently running a 2 node GFS cluster. The storage is provided via
a fiber channel connection to the SAN. Each node currently has a single
FC connection to the SAN. I would like to migrate to using dm-multipath
with each node having dual fiber channel connections to the SAN. Can I
assume that CLVM is aware of the /dev/dm-# devices that are used to
access the multipathed devices? Are there any gotchas that are
associated with installing the device-mapper-multipath software after
the GFS cluster is up and running? Are there any howtos available for
this type of setup?
Regards and TIA,
Sajesh Singh
From dirk.schulz at kinzesberg.de Wed Jul 2 18:55:17 2008
From: dirk.schulz at kinzesberg.de (Dirk H. Schulz)
Date: Wed, 02 Jul 2008 20:55:17 +0200
Subject: [Linux-cluster] Crashing machines with luci and ricci
Message-ID:
Hi folks,
I have tried setting up a cluster with ricci and luci. I have done the
following:
- set up 2 cluster nodes
- current patch level applied (5.2)
- installed ricci on the nodes and luci on a management station
- used luci web interface to setup the cluster
After initial setup luci stated that one node could not be reached or did not
have ricci running. Both nodes were set up identically and could be reached
fine. ricci was running on both machines.
So I used the "restart the cluster" button - and that crashed both nodes
within 10 minutes. One machine was unreachable nearly at once, the other
had 100 % CPU load for several minutes before going down.
Now so far there is not much I could have done wrong (at least not
according to documentation). So I would like to know: Is this normal? Is
using ricci and luci a bad idea because they simply do not work?
Or the other way round: Are folks out there using these tools with positive
results - and are there nuts and bolts I could have avoided?
Any hint or help is appreciated.
Dirk
From ccaulfie at redhat.com Thu Jul 3 07:29:51 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Thu, 03 Jul 2008 08:29:51 +0100
Subject: [Linux-cluster] Multipathing, CLVM and GFS
In-Reply-To: <486BCFC4.9030203@amnh.org>
References: <486BCFC4.9030203@amnh.org>
Message-ID: <486C7FEF.8070300@redhat.com>
Sajesh Singh wrote:
> Centos 4.6
> Cluster Suite
>
> I am currently running a 2 node GFS cluster. The storage is provided via
> a fiber channel connection to the SAN. Each node currently has a single
> FC connection to the SAN. I would like to migrate to using dm-multipath
> with each node having dual fiber channel connections to the SAN. Can I
> assume that CLVM is aware of the /dev/dm-# devices that are used to
> access the multipathed devices? Are there any gotchas that are
> associated with installing the device-mapper-multipath software after
> the GFS cluster is up and running? Are there any howtos available for
> this type of setup?
>
clvmd works fine with dm-multipath devices. You will probably have to
edit /etc/lvm/lvm.conf to exclude the underlying /dev/sd devices to stop
it getting confused though.
You won't be able to do this with GFS mounted on the local node though;
you'll have to umount it, set up dm-multipath, vgscan & remount. You CAN
leave them mounted on other nodes while you do it.
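Something like this in /etc/lvm/lvm.conf, adjusted to your own device naming
(only a sketch - check with 'pvs' afterwards that just the multipath devices
are listed):
  # accept multipath devices, reject the raw /dev/sd* paths underneath them
  filter = [ "a|^/dev/mapper/mpath|", "r|^/dev/sd|" ]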
--
Chrissie
From grimme at atix.de Thu Jul 3 08:01:14 2008
From: grimme at atix.de (Marc Grimme)
Date: Thu, 3 Jul 2008 09:01:14 +0100
Subject: [Linux-cluster] Last and final official release candidate of the
com.oonics open shared root cluster installation DVD is
available (RC4)
Message-ID: <200807031001.14953.grimme@atix.de>
Hello,
we are very happy to announce the availability of the last and final official
release candidate of the com.oonics open shared root cluster installation DVD
(RC4).
The com.oonics open shared root cluster installation DVD allows the
installation of a single node open shared root cluster with the use of
anaconda, the well known installation software provided by Red Hat. After the
installation, the open shared root cluster can be easily scaled up to more
than a hundred cluster nodes.
You can now download the open shared root installation DVD from
www.open-sharedroot.org.
We are very interested in feedback. Please either file a bug or feature
request, or post to the mailing list (see www.open-sharedroot.org).
More details can be found here:
http://open-sharedroot.org/news-archive/availability-of-rc4-of-the-com-oonics-version-of-anaconda
Note: The download ISOs are based on CentOS 5.1!
RHEL5.1 versions will be provided on request.
Have fun testing it and let us know what you think.
--
Gruss / Regards,
Marc Grimme
http://www.atix.de/ http://www.open-sharedroot.org/
From garromo at us.ibm.com Thu Jul 3 12:42:37 2008
From: garromo at us.ibm.com (Gary Romo)
Date: Thu, 3 Jul 2008 06:42:37 -0600
Subject: [Linux-cluster] Cluster server maintenance
Message-ID:
I have a two node cluster, RHEL 5, Protocol version: 5.0.1. Can anyone
suggest the best method, and/or explain -u and -q for the clusvcadm command
to me? Thanks!
Here is what I want to do:
1. Shutdown the services running; DBs, apps whatever...
2. I don't want the services starting on the other node, or anywhere.
3. We don't want any fencing to take place
4. We do our maintenance; Patch server, whatever...
5. Bring the services back up; DBs, apps whatever...
Now the only way I have found to do this so far is to disable the service.
# clusvcadm -d (and maybe that is the only answer)
Man pages do not provide much information:
-u Unlock the cluster's service managers. This allows services to
transition again. It will be necessary to re-enable all services in the
stopped state if this is run after clushutdown.
There is also -q for quiet operation, about which I am not finding any
information.
# clusvcadm -h
Resource Group Control Commands:
   clusvcadm -v                     Display version and exit
   clusvcadm -d <group>             Disable <group>
   clusvcadm -e <group>             Enable <group> on the local node
   clusvcadm -e <group> -F          Enable <group> according to failover
                                    domain rules
   clusvcadm -e <group> -m <member> Enable <group> on <member>
   clusvcadm -r <group> -m <member> Relocate <group> [to <member>]
   clusvcadm -q                     Quiet operation
   clusvcadm -R <group>             Restart a group in place.
   clusvcadm -s <group>             Stop <group>
Resource Group Locking (for cluster Shutdown / Debugging):
   clusvcadm -l                     Lock local resource group manager.
                                    This prevents resource groups from
                                    starting on the local node.
   clusvcadm -S                     Show lock state
   clusvcadm -u                     Unlock local resource group manager.
                                    This allows resource groups to start
                                    on the local node.
Gary Romo
IBM Global Technology Services
303.458.4415
Email: garromo at us.ibm.com
Pager:1.877.552.9264
Text message: gromo at skytel.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lhh at redhat.com Thu Jul 3 17:36:53 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Thu, 03 Jul 2008 13:36:53 -0400
Subject: [Linux-cluster] Cluster server maintenance
In-Reply-To:
References:
Message-ID: <1215106613.23062.48.camel@localhost.localdomain>
On Thu, 2008-07-03 at 06:42 -0600, Gary Romo wrote:
> I have a two node cluster, RHEL 5, Protocol version: 5.0.1. Can anyone
> suggest
> the best method, and or explain -u and -q fo the clusvcadm command to
> me? Thanks!
>
> Here is what I want to do:
>
> 1. Shutdown the services running; DBs, apps whatever...
> 2. I don't want the services starting on the other node, or anywhere.
> 3. We don't want any fencing to take place
> 4. We do our maintenance; Patch server, whatever...
> 5. Bring the services back up; DBs, apps whatever...
> Now the only way I have found to do this so far is to disable the
> service.
>
> # clusvcadm -d (and maybe that is the only answer)
That's what it's for.
Stopping a service (clusvcadm -s) will stop the service until the next
member transition.
Disabling (-d) a service stops it until either quorum is broken or all
instances of rgmanager have been stopped. That is, as long as one
instance of rgmanager is operating and the cluster is quorate, the
service will remain disabled.
Disabling autostart in Conga (or setting it to 0 in cluster.conf) for a
given service means "on startup, treat this service as disabled instead
of stopped".
Locking rgmanager prevents failover, and is useful in mass simultaneous
shutdown operations, but less so for individual services. The manual
page needs updating; '-l' only needs to be done once.
[one node ] clusvcadm -l
[all nodes] service rgmanager stop
-q = "don't print stuff"
-- Lon
From theophanis_kontogiannis at yahoo.gr Sat Jul 5 15:46:16 2008
From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis)
Date: Sat, 5 Jul 2008 18:46:16 +0300
Subject: [Linux-cluster] Issue with clvmd - Is it really bug??
Message-ID: <01cb01c8deb6$44921e60$cdb65b20$@gr>
Hello,
I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2 with
DRBD
2.6.18-92.1.6.el5.centos.plus
drbd82-8.2.6-1.el5.centos
lvm2-2.02.32-4.el5
lvm2-cluster-2.02.32-4.el5
system-config-lvm-1.1.3-2.0.el5
I do not know if my problem is directly related to
http://kbase.redhat.com/faq/FAQ_51_10471.shtm and
https://bugzilla.redhat.com/show_bug.cgi?id=138396
I do:
pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1
vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1
lvcreate -v -L 348G -n data0 vg0
Then I reboot.
The LV never becomes available.
If I try
vgchange -a y
I get
Error locking on node tweety-1: Volume group for uuid not found:
7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa
0 logical volume(s) in volume group "vg0" now active
If I do
clvmd -R
Then with
vgchange -a y vg0.
the LV becomes available.
Is this really related to the above mentioned bug?
How can I make the LV become available during boot up without any
intervention?
Thank you all for your time,
Theophanis Kontogiannis
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From magawake at gmail.com Sun Jul 6 15:44:00 2008
From: magawake at gmail.com (Mag Gam)
Date: Sun, 6 Jul 2008 11:44:00 -0400
Subject: [Linux-cluster] GUI for cluster.conf
Message-ID: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
Can someone recommend a GUI to configure cluster.conf for me?
TIA
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lists at brimer.org Sun Jul 6 15:47:17 2008
From: lists at brimer.org (Barry Brimer)
Date: Sun, 6 Jul 2008 10:47:17 -0500 (CDT)
Subject: [Linux-cluster] GUI for cluster.conf
In-Reply-To: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
Message-ID:
> Can someone recommend a GUI to configure cluster.conf for me?
system-config-cluster
From td3201 at gmail.com Sun Jul 6 17:35:55 2008
From: td3201 at gmail.com (Terry)
Date: Sun, 6 Jul 2008 12:35:55 -0500
Subject: [Linux-cluster] GUI for cluster.conf
In-Reply-To:
References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
Message-ID: <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com>
On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote:
>> Can someone recommend a GUI to configure cluster.conf for me?
>
> system-config-cluster
>
For what it's worth, I tried both system-config-cluster and Conga and
found old fashioned command line tools to be more convenient.
Granted, their organization and naming conventions need some work but
after you use them a little while, you'll memorize them. Also, I
leaned heavily upon google to find all the configuration options as I
couldn't find much in the man pages.
From magawake at gmail.com Sun Jul 6 17:55:22 2008
From: magawake at gmail.com (Mag Gam)
Date: Sun, 6 Jul 2008 13:55:22 -0400
Subject: [Linux-cluster] GUI for cluster.conf
In-Reply-To: <8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com>
References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
<8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com>
Message-ID: <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com>
Thanks
On Sun, Jul 6, 2008 at 1:35 PM, Terry wrote:
> On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote:
> >> Can someone recommend a GUI to configure cluster.conf for me?
> >
> > system-config-cluster
> >
>
> For what it's worth, I tried both system-config-cluster and Conga and
> found old fashioned command line tools to be more convenient.
> Granted, their organization and naming conventions need some work but
> after you use them a little while, you'll memorize them. Also, I
> leaned heavily upon google to find all the configuration options as I
> couldn't find much in the man pages.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From magawake at gmail.com Sun Jul 6 18:00:10 2008
From: magawake at gmail.com (Mag Gam)
Date: Sun, 6 Jul 2008 14:00:10 -0400
Subject: [Linux-cluster] qdiskd question
Message-ID: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
I have an 8 node cluster with shared Hitachi SAN disk. On each disk I
created a 20M partition for qdisk, but I created a qdisk on only one disk:
mkqdisk -c /dev/sda -l css
Is it a good idea to create it on all disks (/dev/sdb, sdc, sdd,
etc.), or would I be fine with only one disk?
Also, I suppose I need to make changes to cluster.conf after I do this, correct?
TIA
From bfields at fieldses.org Sun Jul 6 21:51:05 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Sun, 6 Jul 2008 17:51:05 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080627184117.GE19105@redhat.com>
References: <20080625224544.GJ12629@fieldses.org>
<20080626152733.GC21081@redhat.com>
<20080626183529.GD10593@fieldses.org>
<20080626191106.GA11945@fieldses.org>
<20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
Message-ID: <20080706215105.GA28037@fieldses.org>
On Fri, Jun 27, 2008 at 01:41:17PM -0500, David Teigland wrote:
> On Fri, Jun 27, 2008 at 01:28:56PM -0400, david m. richter wrote:
> > i also have another setup in vmware; while i doubt it's
> > substantively different than bruce's, i'm a ready and willing tester. is
> > there a different branch (or repo, or just a stack of patches somewhere)
> > that i should/could be using?
>
> If on 2.6.25, then use
>
> ftp://ftp%40openais%2Eorg:downloads at openais.org/downloads/openais-0.80.3/openais-0.80.3.tar.gz
> ftp://sources.redhat.com/pub/cluster/releases/cluster-2.03.04.tar.gz
>
> If on 2.6.26-rc, then you'll need to add the attached patch to cluster.
I tried that patch against STABLE2, and needed the following to get it
to compile.
diff --git a/group/gfs_controld/plock.c b/group/gfs_controld/plock.c
index 5e4f56b..f04a6b8 100644
--- a/group/gfs_controld/plock.c
+++ b/group/gfs_controld/plock.c
@@ -790,7 +790,7 @@ static void write_result(struct mountgroup *mg, struct dlm_plock_info *in,
in->fsid = mg->associated_ls_id;
in->rv = rv;
- write(control_fd, in, sizeof(struct gdlm_plock_info));
+ write(control_fd, in, sizeof(struct dlm_plock_info));
}
static void do_waiters(struct mountgroup *mg, struct resource *r)
I built everything with debugging turned on. The second mount again
hangs, with a lot of this in the logs:
Jul 1 14:06:42 piglet2 kernel: dlm: connecting to 1
Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
Jul 1 14:08:35 piglet2 kernel: INFO: task mount.gfs2:6130 blocked for more than 120 seconds.
Jul 1 14:08:35 piglet2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 1 14:08:35 piglet2 kernel: mount.gfs2 D c09f0244 1896 6130 6129
Jul 1 14:08:35 piglet2 kernel: ce920bc4 00000046 ce9d28e0 c09f0244 6f5e11cb 00000621 ce9d2b40 ce9d2b40
Jul 1 14:08:35 piglet2 kernel: 00000046 cf167db8 ce9d28e0 0077d2a4 00000000 6fd5e46f 00000621 ce9d28e0
Jul 1 14:08:35 piglet2 kernel: 00000003 ce9e7874 00000002 7fffffff ce920bec c063cdc5 7fffffff ce920be0
Jul 1 14:08:35 piglet2 kernel: Call Trace:
Jul 1 14:08:35 piglet2 kernel: [] schedule_timeout+0x75/0xb0
Jul 1 14:08:35 piglet2 kernel: [] ? trace_hardirqs_on+0x9d/0x110
Jul 1 14:08:35 piglet2 kernel: [] wait_for_common+0x9e/0x110
Jul 1 14:08:35 piglet2 kernel: [] ? default_wake_function+0x0/0x10
Jul 1 14:08:35 piglet2 kernel: [] wait_for_completion+0x12/0x20
Jul 1 14:08:35 piglet2 kernel: [] dlm_new_lockspace+0x766/0x7f0
Jul 1 14:08:35 piglet2 kernel: [] gdlm_mount+0x304/0x430
Jul 1 14:08:35 piglet2 kernel: [] gfs2_mount_lockproto+0x13f/0x160
Jul 1 14:08:35 piglet2 kernel: [] fill_super+0x3d2/0x6e0
Jul 1 14:08:35 piglet2 kernel: [] ? gfs2_glock_cb+0x0/0x150
Jul 1 14:08:35 piglet2 kernel: [] ? disk_name+0x25/0x90
Jul 1 14:08:35 piglet2 kernel: [] get_sb_bdev+0xef/0x120
Jul 1 14:08:35 piglet2 kernel: [] ? alloc_vfsmnt+0xd5/0x110
Jul 1 14:08:35 piglet2 kernel: [] gfs2_get_sb+0x15/0x40
Jul 1 14:08:35 piglet2 kernel: [] ? fill_super+0x0/0x6e0
Jul 1 14:08:35 piglet2 kernel: [] vfs_kern_mount+0x53/0x120
Jul 1 14:08:35 piglet2 kernel: [] do_kern_mount+0x31/0xc0
Jul 1 14:08:35 piglet2 kernel: [] do_new_mount+0x56/0x80
Jul 1 14:08:35 piglet2 kernel: [] do_mount+0x1c6/0x1f0
Jul 1 14:08:35 piglet2 kernel: [] ? cache_alloc_debugcheck_after+0x71/0x1a0
Jul 1 14:08:35 piglet2 kernel: [] ? __get_free_pages+0x1b/0x30
Jul 1 14:08:35 piglet2 kernel: [] ? copy_mount_options+0x2a/0x130
Jul 1 14:08:35 piglet2 kernel: [] sys_mount+0x6a/0xb0
Jul 1 14:08:35 piglet2 kernel: [] syscall_call+0x7/0xb
Jul 1 14:08:35 piglet2 kernel: =======================
Jul 1 14:08:35 piglet2 kernel: 4 locks held by mount.gfs2/6130:
Jul 1 14:08:35 piglet2 kernel: #0: (&type->s_umount_key#20){--..}, at: [] sget+0x176/0x360
Jul 1 14:08:35 piglet2 kernel: #1: (lmh_lock){--..}, at: [] gfs2_mount_lockproto+0x20/0x160
Jul 1 14:08:35 piglet2 kernel: #2: (&ls_lock){--..}, at: [] dlm_new_lockspace+0x1e/0x7f0
Jul 1 14:08:35 piglet2 kernel: #3: (&ls->ls_in_recovery){--..}, at: [] dlm_new_lockspace+0x5cf/0x7f0
Jul 1 14:10:44 piglet2 kernel: INFO: task mount.gfs2:6130 blocked for more than 120 seconds.
Jul 1 14:10:44 piglet2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 1 14:10:44 piglet2 kernel: mount.gfs2 D c09f0244 1896 6130 6129
So I gave up on this and tried going back to v2.6.25, and the suggested
cluster-2.03.04, but the second mounts still hang, and a sysrq-T trace
shows the mount system call hanging in dlm_new_lockspace().
Since this I guess is a known-working set of software versions, I'm
assuming there's something wrong with my setup....
It looks like dlm_new_lockspace() is waiting on dlm_recoverd, which is
in "D" state in dlm_rcom_status(), so I guess the second node isn't
getting some dlm reply it expects?
--b.
From pastany at gmail.com Mon Jul 7 02:47:44 2008
From: pastany at gmail.com (pastany)
Date: Mon, 7 Jul 2008 10:47:44 +0800
Subject: [Linux-cluster] gfs-6.1.5 problem
Message-ID: <200807071047416256159@gmail.com>
Hi everyone
after a power off, we can't mount our gfs partition.
After gfs_fsck, it is still not working.
Here is the gfs_fsck output:
gfs_fsck -vv /dev/mapper/vod-lv_vod
Initializing fsck
Initializing lists...
(bio.c:140) Writing to 65536 - 16 4096
Initializing special inodes...
(file.c:45) readi: Offset (640) is >= the file size (640).
(super.c:208) 8 journals found.
(file.c:45) readi: Offset (1210752) is >= the file size (1210752).
(super.c:265) 12612 resource groups found.
(util.c:112) For 238021862 Expected 1161970:3 - got 6617DE2F:9BC483A0
Buffer #238021862 (3 of 5) is neither GFS_METATYPE_RB nor
GFS_METATYPE_RG.
Resource group is corrupted.
Unable to read in rgrp descriptor.
Unable to fill in resource group information.
(initialize.c:388) - init_sbp()
any help is appreciated
pastany
2008-07-07
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccaulfie at redhat.com Mon Jul 7 07:23:58 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Mon, 07 Jul 2008 08:23:58 +0100
Subject: [Linux-cluster] Issue with clvmd - Is it really bug??
In-Reply-To: <01cb01c8deb6$44921e60$cdb65b20$@gr>
References: <01cb01c8deb6$44921e60$cdb65b20$@gr>
Message-ID: <4871C48E.3030004@redhat.com>
Theophanis Kontogiannis wrote:
> Hello,
>
>
>
> I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2
> with DRBD
>
>
>
> 2.6.18-92.1.6.el5.centos.plus
>
> drbd82-8.2.6-1.el5.centos
>
> lvm2-2.02.32-4.el5
>
> lvm2-cluster-2.02.32-4.el5
>
> system-config-lvm-1.1.3-2.0.el5
>
>
>
> I do not know if my problem is directly related to
> http://kbase.redhat.com/faq/FAQ_51_10471.shtm and
> https://bugzilla.redhat.com/show_bug.cgi?id=138396
>
>
>
> I do:
>
>
>
> pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1
>
> vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1
>
> lvcreate -v -L 348G -n data0 vg0
>
>
>
> Then I reboot.
>
> The LV never becomes available.
>
>
>
> If I try
>
>
>
> vgchange -a y
>
>
>
> I get
>
>
>
> Error locking on node tweety-1: Volume group for uuid not found:
> 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa
>
> 0 logical volume(s) in volume group "vg0" now active
>
>
>
> If I do
>
>
>
> clvmd -R
>
>
>
> Then with
>
>
>
> vgchange -a y vg0.
>
>
>
> the LV becomes available.
>
>
>
> Is this really related to the above mentioned bug?
>
>
>
> How can I make the LV become available during boot up without any
> intervention?
>
>
>
> Thank you all for your time,
As you're using drbd for the PV, I think it might be to do with startup
ordering. If drbd is started AFTER clvmd then it won't see the devices,
and you'll get exactly the symptoms you describe.
If you can, move drbd to before clvmd, or clvmd to after drbd. Or, failing
that, put the extra commands you used above into their own startup script.
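A quick way to check the current ordering (the priorities below are only
examples; what matters is that drbd's S number sorts before clvmd's):
  ls /etc/rc3.d/ | egrep 'drbd|clvmd'   # e.g. S70drbd must come before S76clvmd
If it doesn't, adjust the start priority in the '# chkconfig:' header of the
drbd (or clvmd) init script and re-create the links with
'chkconfig drbd off && chkconfig drbd on'.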
--
Chrissie
From theophanis_kontogiannis at yahoo.gr Mon Jul 7 09:05:23 2008
From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis)
Date: Mon, 7 Jul 2008 12:05:23 +0300
Subject: [Linux-cluster] Issue with clvmd - Is it really bug??
In-Reply-To: <4871C48E.3030004@redhat.com>
References: <01cb01c8deb6$44921e60$cdb65b20$@gr> <4871C48E.3030004@redhat.com>
Message-ID: <020d01c8e010$98e74110$cab5c330$@gr>
Hello Christine and All,
This was exactly the problem: the sequence in which the services start up. I had fixed this in the past. However, because the problems started after the update I did to the system, it never occurred to me that the cause might be the service startup sequence. In fact, I never looked in /etc/rc3.d to check it.
So because I never thought about this possibility, and because the problems started after the system update in which lvm2 / clvmd was also updated, it stuck in my mind that the problem was due to the new versions of clvmd and lvm2.
Thank you all for your time,
Sincerely,
Theophanis Kontogiannis
-----Original Message-----
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Christine Caulfield
Sent: Monday, July 07, 2008 10:24 AM
To: linux clustering
Subject: Re: [Linux-cluster] Issue with clvmd - Is it really bug??
Theophanis Kontogiannis wrote:
> Hello,
>
>
>
> I have a 2 node cluster at home with CentOS 5 running on 64bit AMDx2
> with DRBD
>
>
>
> 2.6.18-92.1.6.el5.centos.plus
>
> drbd82-8.2.6-1.el5.centos
>
> lvm2-2.02.32-4.el5
>
> lvm2-cluster-2.02.32-4.el5
>
> system-config-lvm-1.1.3-2.0.el5
>
>
>
> I do not know if my problem is directly related to
> http://kbase.redhat.com/faq/FAQ_51_10471.shtm and
> https://bugzilla.redhat.com/show_bug.cgi?id=138396
>
>
>
> I do:
>
>
>
> pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1
>
> vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1
>
> lvcreate -v -L 348G -n data0 vg0
>
>
>
> Then I reboot.
>
> The LV never becomes available.
>
>
>
> If I try
>
>
>
> vgchange -a y
>
>
>
> I get
>
>
>
> Error locking on node tweety-1: Volume group for uuid not found:
> 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa
>
> 0 logical volume(s) in volume group "vg0" now active
>
>
>
> If I do
>
>
>
> clvmd -R
>
>
>
> Then with
>
>
>
> vgchange -a y vg0.
>
>
>
> the LV becomes available.
>
>
>
> Is this really related to the above mentioned bug?
>
>
>
> How can I make the LV become available during boot up without any
> intervention?
>
>
>
> Thank you all for your time,
As you're using drbd for the PV, I think it might be to do startup
ordering. If drbd is started AFTER clvmd then it won't see the devices,
and you'll get exactly the symptoms you describe.
if you can, move drbd to before clvmd, or clvmd after drbd. Or, failing
that, put the extra commands you used above into their own startup script.
--
Chrissie
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
From ozgurakan at gmail.com Mon Jul 7 10:45:53 2008
From: ozgurakan at gmail.com (Ozgur Akan)
Date: Mon, 7 Jul 2008 06:45:53 -0400
Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2
Message-ID: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com>
Hi,
We keep "gfs_controld[3054]: plock result write err 0 errno 2" error
message
in message;
Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 errno 2
Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0 errno 2
Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0 errno 2
Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0 errno 2
Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0 errno 2
Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0 errno 2
It looks like it happens every 15 minutes. Do you have any idea what this
means and how I can prevent it from happening?
thanks,
Ozgur Akan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From swhiteho at redhat.com Mon Jul 7 10:44:11 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Mon, 07 Jul 2008 11:44:11 +0100
Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2
In-Reply-To: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com>
References: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com>
Message-ID: <1215427451.4011.121.camel@quoit>
Hi,
Are there any other messages in the logs? Which kernel version are you
using? Also do you think it might be similar to bz #454052?
Steve.
On Mon, 2008-07-07 at 06:45 -0400, Ozgur Akan wrote:
> Hi,
>
> We keep "gfs_controld[3054]: plock result write err 0 errno 2" error
> message
>
> in message;
>
> Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
> Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
> Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
> Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
> Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
> Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0
> errno 2
>
>
> It looks like happening every 15 minutes. Do you have any idea what
> this means and how can I prevent from happening?
>
> thanks,
> Ozgur Akan
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From nkhare.lists at gmail.com Mon Jul 7 11:05:25 2008
From: nkhare.lists at gmail.com (Neependra Khare)
Date: Mon, 07 Jul 2008 16:35:25 +0530
Subject: [Linux-cluster] qdiskd question
In-Reply-To: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
Message-ID: <4871F875.2020802@gmail.com>
Mag Gam wrote:
> I have a 8 node cluster with shared Hitachi SAN disk. On each disk I
> created a 20M partition for qdisk , but only on 1 disk I created a
> qdisk.
> mkqdisk -c /dev/sda -l css
>
> Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd,
> etc..) or would I be find with only one disk?
>
Have you created 8 separate partitions, one for each node,
or one shared partition which is accessible to all the nodes?
Refer to the following for configuring quorum disks:
http://sources.redhat.com/cluster/wiki/FAQ/CMAN#quorum
http://www.redhatmagazine.com/2007/12/19/enhancing-cluster-quorum-with-qdisk/
> Also, I suppose I need to make changes to cluster.conf after I do this, correct?
>
Yes.
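If it is one shared partition, the cluster.conf side is roughly (values are
placeholders; the label must match what you passed to 'mkqdisk -l'):
  <quorumd interval="1" tko="10" votes="1" label="css"/>
and you will usually want to adjust expected_votes on the cman element to
account for the qdisk's votes.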
Neependra.
From vimal.jtech at gmail.com Mon Jul 7 12:27:45 2008
From: vimal.jtech at gmail.com (Vimal Gupta)
Date: Mon, 7 Jul 2008 12:27:45 +0000
Subject: [Linux-cluster] GUI for cluster.conf
In-Reply-To: <1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com>
References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
<8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com>
<1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com>
Message-ID: <437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com>
If I am right, we can also use luci for that.
On 7/6/08, Mag Gam wrote:
>
> Thanks
>
> On Sun, Jul 6, 2008 at 1:35 PM, Terry wrote:
>
>> On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer wrote:
>> >> Can someone recommend a GUI to configure cluster.conf for me?
>> >
>> > system-config-cluster
>> >
>>
>> For what it's worth, I tried both system-config-cluster and Conga and
>> found old fashioned command line tools to be more convenient.
>> Granted, their organization and naming conventions need some work but
>> after you use them a little while, you'll memorize them. Also, I
>> leaned heavily upon google to find all the configuration options as I
>> couldn't find much in the man pages.
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From teigland at redhat.com Mon Jul 7 15:48:28 2008
From: teigland at redhat.com (David Teigland)
Date: Mon, 7 Jul 2008 10:48:28 -0500
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080706215105.GA28037@fieldses.org>
References: <20080625224544.GJ12629@fieldses.org>
<20080626152733.GC21081@redhat.com>
<20080626183529.GD10593@fieldses.org>
<20080626191106.GA11945@fieldses.org>
<20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
Message-ID: <20080707154828.GB10404@redhat.com>
On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> - write(control_fd, in, sizeof(struct gdlm_plock_info));
> + write(control_fd, in, sizeof(struct dlm_plock_info));
Gah, sorry, I keep fixing that and it keeps reappearing.
> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> It looks like dlm_new_lockspace() is waiting on dlm_recoverd, which is
> in "D" state in dlm_rcom_status(), so I guess the second node isn't
> getting some dlm reply it expects?
dlm inter-node communication is not working here for some reason. There
must be something unusual with the way the network is configured on the
nodes, and/or a problem with the way the cluster code is applying the
network config to the dlm.
Ah, I just remembered what this sounds like; we see this kind of thing
when a network interface has multiple IP addresses, and/or routing is
configured strangely. Others cc'ed could offer better details on exactly
what to look for.
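A rough checklist (not an exact recipe) - compare what the cluster stack
thinks its address is with what the interfaces actually carry, on both nodes:
  cman_tool status   # shows the node name/address the cluster is using
  cman_tool nodes    # membership as the cluster sees it
  ip addr show       # look for extra addresses on the cluster interface
  ip route show      # and for routes that could push dlm traffic out a
                     # different interface than the one cman is bound to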
Dave
From jparsons at redhat.com Mon Jul 7 15:54:07 2008
From: jparsons at redhat.com (jim parsons)
Date: Mon, 07 Jul 2008 11:54:07 -0400
Subject: [Linux-cluster] GUI for cluster.conf
In-Reply-To: <437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com>
References: <1cbd6f830807060844q5bffb5cbl7f5ebfedf0c935e1@mail.gmail.com>
<8ee061010807061035l3e48594xf4e3eaf904ec8cdf@mail.gmail.com>
<1cbd6f830807061055t635789acg3b7ddae0a165bee9@mail.gmail.com>
<437115c80807070527hd8f61eeg7394f0d396c6e249@mail.gmail.com>
Message-ID: <1215446047.3300.2.camel@localhost.localdomain>
On Mon, 2008-07-07 at 12:27 +0000, Vimal Gupta wrote:
>
> IF I am right , We also can use luci for that also .
Luci is the UI component of Conga.
Command line tools are great - but when you wish to do something like
restart all the cluster daemons on all of your nodes, using Conga can be
handy. It saves having to shell around to all of your nodes and execute
commands.
jmho,
-j
> On 7/6/08, Mag Gam wrote:
> Thanks
>
> On Sun, Jul 6, 2008 at 1:35 PM, Terry
> wrote:
> On Sun, Jul 6, 2008 at 10:47 AM, Barry Brimer
> wrote:
> >> Can someone recommend a GUI to configure
> cluster.conf for me?
> >
> > system-config-cluster
> >
>
>
> For what it's worth, I tried both
> system-config-cluster and Conga and
> found old fashioned command line tools to be more
> convenient.
> Granted, their organization and naming conventions
> need some work but
> after you use them a little while, you'll memorize
> them. Also, I
> leaned heavily upon google to find all the
> configuration options as I
> couldn't find much in the man pages.
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From teigland at redhat.com Mon Jul 7 16:46:13 2008
From: teigland at redhat.com (David Teigland)
Date: Mon, 7 Jul 2008 11:46:13 -0500
Subject: [Linux-cluster] gfs_controld plock result write err 0 errno 2
In-Reply-To: <1215427451.4011.121.camel@quoit>
References: <68f132770807070345w49c15103sb01102cdf601c080@mail.gmail.com>
<1215427451.4011.121.camel@quoit>
Message-ID: <20080707164613.GD10404@redhat.com>
On Mon, Jul 07, 2008 at 11:44:11AM +0100, Steven Whitehouse wrote:
> Hi,
>
> Are there any other messages in the logs? Which kernel version are you
> using? Also do you think it might be similar to bz #454052?
https://bugzilla.redhat.com/show_bug.cgi?id=446128
> On Mon, 2008-07-07 at 06:45 -0400, Ozgur Akan wrote:
> > Hi,
> >
> > We keep getting the "gfs_controld[3054]: plock result write err 0 errno 2" error
> > message
> >
> > in messages:
> >
> > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> > Jul 7 05:15:22 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> > Jul 7 05:30:07 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> > Jul 7 06:00:02 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> > Jul 7 06:00:03 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> > Jul 7 06:15:07 ops02 gfs_controld[3054]: plock result write err 0
> > errno 2
> >
> >
> > It looks like it happens every 15 minutes. Do you have any idea what
> > this means and how I can prevent it from happening?
> >
> > thanks,
> > Ozgur Akan
From lhh at redhat.com Mon Jul 7 17:22:51 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Mon, 07 Jul 2008 13:22:51 -0400
Subject: [Linux-cluster] qdiskd question
In-Reply-To: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
Message-ID: <1215451371.22549.77.camel@localhost.localdomain>
On Sun, 2008-07-06 at 14:00 -0400, Mag Gam wrote:
> I have an 8 node cluster with shared Hitachi SAN disk. On each disk I
> created a 20M partition for qdisk, but only on 1 disk did I create a
> qdisk:
> mkqdisk -c /dev/sda -l css
>
> Is it a good idea to create it on all disks (/dev/sdb, sdc, sdd,
> etc.) or would I be fine with only one disk?
>
> Also, I suppose I need to make changes to cluster.conf after I do this, correct?
It currently doesn't support >1 disk.
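If you want to double-check which partitions actually carry a quorum disk
label, something like this on each node should list them (a quick sketch,
from memory):
  mkqdisk -L
...and you still need a matching <quorumd> entry in cluster.conf before
qdiskd will use it.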
-- Lon
From mdmunazir at gmail.com Mon Jul 7 18:40:26 2008
From: mdmunazir at gmail.com (Mohammed Munazir Ul Hasan)
Date: Mon, 7 Jul 2008 21:40:26 +0300
Subject: [Linux-cluster] qdiskd question
In-Reply-To: <1215451371.22549.77.camel@localhost.localdomain>
References: <1cbd6f830807061100g42976910p6448de59b7569bd7@mail.gmail.com>
<1215451371.22549.77.camel@localhost.localdomain>
Message-ID:
Hi all administrators and users,
I am Mohammed Munazir, working as a Linux administrator in Saudi Arabia. I
am a Red Hat Certified Engineer.
My company is planning to go for Red Hat Cluster for its web-hosting servers. We
have a LAMP server configured.
As I am a fresher I have never done Red Hat clustering, so I would appreciate
any help with this.
I need good documentation and links for clustering and storage management.
If you experts can help me I will be very thankful to you.
Waiting for an early and favourable reply from all of you.
Thanking you,
Mohammed Munazir
On 7/7/08, Lon Hohberger wrote:
>
> On Sun, 2008-07-06 at 14:00 -0400, Mag Gam wrote:
> > I have a 8 node cluster with shared Hitachi SAN disk. On each disk I
> > created a 20M partition for qdisk , but only on 1 disk I created a
> > qdisk.
> > mkqdisk -c /dev/sda -l css
> >
> > Is it a good idea to create it on all disks? (/dev/sdb, sdc, sdd,
> > etc..) or would I be find with only one disk?
> >
> > Also, I suppose I need to make changes to cluster.conf after I do this,
> correct?
>
> It currently doesn't support >1 disk.
>
> -- Lon
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
From bfields at fieldses.org Mon Jul 7 18:49:28 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Mon, 7 Jul 2008 14:49:28 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080707154828.GB10404@redhat.com>
References: <20080626152733.GC21081@redhat.com>
<20080626183529.GD10593@fieldses.org>
<20080626191106.GA11945@fieldses.org>
<20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
Message-ID: <20080707184928.GE14291@fieldses.org>
On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > + write(control_fd, in, sizeof(struct dlm_plock_info));
>
> Gah, sorry, I keep fixing that and it keeps reappearing.
>
>
> > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>
> > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> > in "D" state in dlm_rcom_status(), so I guess the second node isn't
> > getting some dlm reply it expects?
>
> dlm inter-node communication is not working here for some reason. There
> must be something unusual with the way the network is configured on the
> nodes, and/or a problem with the way the cluster code is applying the
> network config to the dlm.
>
> Ah, I just remembered what this sounds like; we see this kind of thing
> when a network interface has multiple IP addresses, and/or routing is
> configured strangely. Others cc'ed could offer better details on exactly
> what to look for.
OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
neither, and it's entirely likely there's some obvious misconfiguration.
On the kvm host there are 4 virtual interfaces bridged together:
bfields at pig:~$ brctl show
bridge name bridge id STP enabled interfaces
vnet0 8000.00ff0823c0f3 yes vnet1
vnet2
vnet3
vnet4
vnet0 has address 192.168.122.1 on the host, and the 4 kvm guests are
statically assigned addresses 129, 130, 131, and 132 on the 192.168.122.*
network, so a kvm guest looks like:
piglet1:~# ifconfig
eth1 Link encap:Ethernet HWaddr 00:16:3e:16:4d:61
inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe16:4d61/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2464 errors:0 dropped:0 overruns:0 frame:0
TX packets:1806 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:197099 (192.4 KiB) TX bytes:165606 (161.7 KiB)
Interrupt:11 Base address:0xc100
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:285 errors:0 dropped:0 overruns:0 frame:0
TX packets:285 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:13394 (13.0 KiB) TX bytes:13394 (13.0 KiB)
piglet1:~# cat /etc/hosts
127.0.0.1 localhost
192.168.122.129 piglet1
192.168.122.130 piglet2
192.168.122.131 piglet3
192.168.122.132 piglet4
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
The network setup looks otherwise fine--they can all ping each other and
the outside world.
--b.
From ozgurakan at gmail.com Mon Jul 7 21:18:07 2008
From: ozgurakan at gmail.com (Ozgur Akan)
Date: Mon, 7 Jul 2008 17:18:07 -0400
Subject: [Linux-cluster] quota and noatime configurations
Message-ID: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
Hi,
Even though I have ' options="noatime,quota=off" ' in my cluster.conf file,
I see [gfs2_quotad] running, and
I cannot see quota in the mtab file:
/dev/mapper/vg_bbn-lv_aas /my/home gfs2
rw,noatime,hostdata=jid=0:id=196610:first=1
0 0
is this normal?
thanks,
From shawnlhood at gmail.com Mon Jul 7 21:22:41 2008
From: shawnlhood at gmail.com (Shawn Hood)
Date: Mon, 7 Jul 2008 17:22:41 -0400
Subject: [Linux-cluster] quota and noatime configurations
In-Reply-To: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
Message-ID:
Have you tried noquota?
2008/7/7 Ozgur Akan :
> Hi,
>
> Even I have ' options="noatime,quota=off" ' in my cluster.conf file,
> I see [gfs2_quotad] running and
>
> I can not see quota in mtab file
> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id
> =196610:first=1 0 0
>
> is this normal?
>
> thanks,
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
--
--
Shawn Hood
910.670.1819 m
From swhiteho at redhat.com Tue Jul 8 09:43:56 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Tue, 08 Jul 2008 10:43:56 +0100
Subject: [Linux-cluster] quota and noatime configurations
In-Reply-To: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
Message-ID: <1215510236.3475.0.camel@localhost.localdomain>
Hi,
On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote:
> Hi,
>
> Even I have ' options="noatime,quota=off" ' in my cluster.conf
> file,
> I see [gfs2_quotad] running and
>
> I can not see quota in mtab file
> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id
> =196610:first=1 0 0
>
> is this normal?
>
Yes, it will likely only appear if you turn it on since the default is
off,
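If you want to convince yourself on a test mount, something along these
lines should do it (just a sketch; substitute your own device and
mountpoint, and from memory gettune lists the quota tunables):
  mount -o noatime,quota=off -t gfs2 /dev/mapper/vg_bbn-lv_aas /my/home
  grep /my/home /etc/mtab
  gfs2_tool gettune /my/home | grep quota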
Steve.
> thanks,
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From kees at tweakers.net Tue Jul 8 12:25:31 2008
From: kees at tweakers.net (Kees Hoekzema)
Date: Tue, 8 Jul 2008 14:25:31 +0200
Subject: [Linux-cluster] Freezing GFS mount in a cluster
Message-ID: <004a01c8e0f5$b6d8ccd0$248a6670$@net>
Hello List,
Recently we bought a Dell MD3000 iSCSI storage system and we are trying to
get GFS running on it. I have 3 test servers hooked up to the MD3000i and I
have the cluster working, including multipath and different paths.
When I had the cluster up with all 3 nodes in the fence domain and cman_tool
status reporting 3 nodes I made a GFS partition and formatted it:
# gfs_mkfs -j 10 -p lock_dlm -t tweakers:webdata /dev/mapper/webdata-part1
This worked and I could mount the filesystem on the server I made it on.
However, as soon as I tried to mount it on one of the two other servers, I
would get a freeze and get fenced. After a fresh reboot of the complete
cluster I tried to mount it again. The first server could mount it, but any
server that would try to mount it with the first server having the gfs
mounted would crash.
As I'm fairly new to cman/fencing/gfs clusters, I was wondering whether this is
some 'silly' configuration error, or whether there is something seriously
wrong.
Another thing I would like to know is where to get debug information. Right
now there is not a lot of debug information available, or at least I couldn't
find it. One thing that particularly annoyed me was the 'Waiting for fenced
to join the fence group' message, which didn't come with any explanation
whatsoever. That message finally went away when I powered up the two other
servers and started the cluster on all three simultaneously.
Anyway, about my cluster config for this testing: I use manual fencing, as
the environment I test in does not have exactly the same hardware as the
production environment.
In short, my questions are:
- why can't I mount GFS on another server when it is already mounted on one?
- how do I get more debug information (i.e. the reason why a server can't join a
fence domain, or the reason why a server gets fenced)?
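For reference, these are the commands I know of so far for poking at the
cluster state (a sketch only -- I may well be missing the right ones, and
group_tool assumes the cluster2 tool set):
  cman_tool status
  cman_tool nodes
  group_tool ls
  group_tool dump fence
  tail -f /var/log/messages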
Thank you all for your time,
Kees Hoekzema
From andy at andrewprice.me.uk Tue Jul 8 17:12:56 2008
From: andy at andrewprice.me.uk (Andrew Price)
Date: Tue, 08 Jul 2008 18:12:56 +0100
Subject: [Linux-cluster] Re: quota and noatime configurations
In-Reply-To: <1215510236.3475.0.camel@localhost.localdomain>
References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
<1215510236.3475.0.camel@localhost.localdomain>
Message-ID:
On 08/07/08 10:43, Steven Whitehouse wrote:
> On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote:
>> Hi,
>>
>> Even I have ' options="noatime,quota=off" ' in my cluster.conf
>> file,
>> I see [gfs2_quotad] running and
>>
>> I can not see quota in mtab file
>> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id
>> =196610:first=1 0 0
>>
>> is this normal?
>>
> Yes, it will likely only appear if you turn it on since the default is
> off,
If I'm reading the code correctly, gfs2_quotad is always started
regardless of the quota options.
--
Andy Price
From ozgurakan at gmail.com Tue Jul 8 17:20:30 2008
From: ozgurakan at gmail.com (Ozgur Akan)
Date: Tue, 8 Jul 2008 13:20:30 -0400
Subject: [Linux-cluster] lock_dlm to lock_nolock
Message-ID: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com>
Hi,
Can I mount a gfs filesystem formatted with lock_dlm and use it without
a problem in the cluster if I have proper fencing and that fs is mounted on
only one node at a time?
mount -o lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo/
[root at rhtest01 ~]# ./ping -rw /gfstwo/test 1
data increment = 1
140012 locks/sec
[root at rhtest01 ~]# gfs2_tool df /gfstwo/
/gfstwo:
SB lock proto = "lock_dlm"
SB lock table = "testcluster:gfstwo"
SB ondisk format = 1801
SB multihost format = 1900
Block size = 4096
Journals = 3
Resource Groups = 60
Mounted lock proto = "lock_nolock"
Mounted lock table = "testcluster:gfstwo"
Mounted host data = ""
Journal number = 0
Lock module flags = 1
Local flocks = TRUE
thanks,
Ozgur Akan
From bfields at fieldses.org Tue Jul 8 22:15:33 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Tue, 8 Jul 2008 18:15:33 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080707184928.GE14291@fieldses.org>
References: <20080626183529.GD10593@fieldses.org>
<20080626191106.GA11945@fieldses.org>
<20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
Message-ID: <20080708221533.GI15038@fieldses.org>
On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > > + write(control_fd, in, sizeof(struct dlm_plock_info));
> >
> > Gah, sorry, I keep fixing that and it keeps reappearing.
> >
> >
> > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> >
> > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> > > in "D" state in dlm_rcom_status(), so I guess the second node isn't
> > > getting some dlm reply it expects?
> >
> > dlm inter-node communication is not working here for some reason. There
> > must be something unusual with the way the network is configured on the
> > nodes, and/or a problem with the way the cluster code is applying the
> > network config to the dlm.
> >
> > Ah, I just remembered what this sounds like; we see this kind of thing
> > when a network interface has multiple IP addresses, and/or routing is
> > configured strangely. Others cc'ed could offer better details on exactly
> > what to look for.
>
> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
> neither, and it's entirely likely there's some obvious misconfiguration.
> On the kvm host there are 4 virtual interfaces bridged together:
I ran wireshark on vnet0 while doing the second mount; what I saw was
the second machine opened a tcp connection to port 21064 on the first
(which had already completed the mount), and sent it a single message
identified by wireshark as "DLM3" protocol, type recovery command:
status command. It got back an ACK then a RST.
Then the same happened in the other direction, with the first machine
sending a similar message to port 21064 on the second, which then reset
the connection.
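In case anyone wants to reproduce it, a capture along these lines should
show the same exchange (vnet0 is specific to my host):
  tcpdump -i vnet0 -s 0 -w dlm-mount.pcap port 21064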
--b.
>
> bfields at pig:~$ brctl show
> bridge name bridge id STP enabled interfaces
> vnet0 8000.00ff0823c0f3 yes vnet1
> vnet2
> vnet3
> vnet4
>
> vnet0 has address 192.168.122.1 on the host, and the 4 kvm guests are
> statically assigned addresses 129, 130, 131, and 132 on the 192.168.122.*
> network, so a kvm guest looks like:
>
> piglet1:~# ifconfig
> eth1 Link encap:Ethernet HWaddr 00:16:3e:16:4d:61
> inet addr:192.168.122.129 Bcast:192.168.122.255 Mask:255.255.255.0
> inet6 addr: fe80::216:3eff:fe16:4d61/64 Scope:Link
> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
> RX packets:2464 errors:0 dropped:0 overruns:0 frame:0
> TX packets:1806 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:1000
> RX bytes:197099 (192.4 KiB) TX bytes:165606 (161.7 KiB)
> Interrupt:11 Base address:0xc100
>
> lo Link encap:Local Loopback
> inet addr:127.0.0.1 Mask:255.0.0.0
> inet6 addr: ::1/128 Scope:Host
> UP LOOPBACK RUNNING MTU:16436 Metric:1
> RX packets:285 errors:0 dropped:0 overruns:0 frame:0
> TX packets:285 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:13394 (13.0 KiB) TX bytes:13394 (13.0 KiB)
>
> piglet1:~# cat /etc/hosts
> 127.0.0.1 localhost
> 192.168.122.129 piglet1
> 192.168.122.130 piglet2
> 192.168.122.131 piglet3
> 192.168.122.132 piglet4
>
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> ff02::3 ip6-allhosts
>
> The network setup looks otherwise fine--they can all ping each other and
> the outside world.
>
> --b.
From ajeet.singh.raina at logica.com Wed Jul 9 06:02:49 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 11:32:49 +0530
Subject: [Linux-cluster] Setting Up Two Node Cluster..
Message-ID: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com>
Hello Guys,
I am totally new to the clustering field and recently got involved in a project
related to setting up a Red Hat Cluster.
I have two RHEL 4.0 Update 2 Servers which I have installed with the
following packages each :
ccs-1.0.6-0.x86_64.rpm
cman-1.0.8-0.x86_64.rpm
cman-kernel-smp-2.6.9-39.5.x86_64.rpm
cman-kernel-smp-2.6.9-44.7.x86_64.rpm
device-mapper-1.02.25-1.el4.x86_64.rpm
dlm-1.0.1-1.x86_64.rpm
dlm-kernel-smp-2.6.9-37.7.x86_64.rpm
dlm-kernel-smp-2.6.9-39.1.x86_64.rpm
dlm-kernel-smp-2.6.9-42.7.x86_64.rpm
dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm
lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm
lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm
lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm
lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.21-7.el4.x86_64.rpm
lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm
magma-1.0.5-0.x86_64.rpm
magma-plugins-1.0.8-0.x86_64.rpm
rgmanager-1.9.50-0.x86_64.rpm
system-config-cluster-1.0.27-1.0.noarch.rpm
system-config-cluster-1[1].0.27-1.0.noarch.rpm
perl-Crypt-SSLeay-0.51-5.x86_64.rpm
On 10.14.236.106 I ran # system-config-cluster and added the two nodes
- one being itself (10.14.236.106) and the other (10.14.236.108).
I added the iLO as my fencing device, providing the right credentials. I
didn't add any Resource or Service, as I just wanted to test whether the
two machines see each other or not.
I saved the file and it gave me cluster.conf.
Next I ran
#service ccsd start
#service cman start
That brought up the Cluster Management tab next to the Cluster Configuration
label.
I transferred the cluster.conf manually through scp to the other machine.
Then I ran ccsd and cman on the other machine as well.
After that I ran
#service fenced start
#service rgmanager start
one by one on the two machines.
When I ran clustat, I got:
Member Status: Quorate
Member Name                        Status
------ ----                        ------
BL02DL385                          Online, rgmanager
BL01DL385                          Online, Local, rgmanager
[root at BL01DL385 ~]#
So my nodes are seeing each other. Up to this point it's fine.
Now I have one script called tester.sh placed on the 106 machine, and
I am adding it to the Script section under Resources, giving the full path.
Then I restart the services again in the same order.
The cluster.conf file is now the same on both systems.
Say, if I reboot the 106 system, will the other server show the
script running?
Please advise.
From ajeet.singh.raina at logica.com Wed Jul 9 06:04:32 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 11:34:32 +0530
Subject: [Linux-cluster] RE: Setting Up Two Node Cluster..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB20808C170@in-ex004.groupinfra.com>
FYI, I have no shared storage. Is it needed in this scenario?
What could be the right alternative?
________________________________
From: Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 11:33 AM
To: 'linux-cluster at redhat.com'
Subject: Setting Up Two Node Cluster..
Hello Guys,
I am totally in clustering field and recently involved in a project
related to Setting Up a Red Hat Cluster.
I have two RHEL 4.0 Update 2 Servers which I have installed with the
following packages each :
ccs-1.0.6-0.x86_64.rpm
cman-1.0.8-0.x86_64.rpm
cman-kernel-smp-2.6.9-39.5.x86_64.rpm
cman-kernel-smp-2.6.9-44.7.x86_64.rpm
device-mapper-1.02.25-1.el4.x86_64.rpm
dlm-1.0.1-1.x86_64.rpm
dlm-kernel-smp-2.6.9-37.7.x86_64.rpm
dlm-kernel-smp-2.6.9-39.1.x86_64.rpm
dlm-kernel-smp-2.6.9-42.7.x86_64.rpm
dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm
lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm
lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm
lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm
lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.21-7.el4.x86_64.rpm
lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm
magma-1.0.5-0.x86_64.rpm
magma-plugins-1.0.8-0.x86_64.rpm
rgmanager-1.9.50-0.x86_64.rpm
system-config-cluster-1.0.27-1.0.noarch.rpm
system-config-cluster-1[1].0.27-1.0.noarch.rpm
perl-Crypt-SSLeay-0.51-5.x86_64.rpm
On 10.14.236.106 I ran # system-config-cluster and I added the two Node
- One itself(10.14.236.106) and the other(10.14.236.108).
I added The ILO as my Fencing Device providing the right credentials.I
dint added any Resource and Service as I just want to test whether the
two amchines sees wach other or not.
I saved the file and it gave me cluster.conf.
Next I ran
#service ccsd start
#service cman start
That Brought out Cluster Management Option next to Cluster Configuration
label.
I transported the cluster.conf manually through scp to the next machine.
Now I too ran the ccsd and cman on the other machine.
Then I ran
#service fenced start
#service rgmanager start
One by one to the two machine.
When I ran the command:
Member Status: Quorate
Member Name Status
------ ---- ------
BL02DL385 Online, rgmanager
BL01DL385 Online, Local, rgmanager
[root at BL01DL385 ~]#
So My Nodes are seeing each other.Upto this Its Fine.
Now I have one script called tester.sh placed in 106 machine and All I
am adding it to Script Section under Resource giving the full path.
Now Again I am restarting the service in order.
Now The Cluster.conf file is same in both the system
Say,if I reboot the 106 system, Will the next Server show running the
script?????
Please Advise.
From nkhare.lists at gmail.com Wed Jul 9 06:55:30 2008
From: nkhare.lists at gmail.com (Neependra Khare)
Date: Wed, 09 Jul 2008 12:25:30 +0530
Subject: [Linux-cluster] Setting Up Two Node Cluster..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB20808C16F@in-ex004.groupinfra.com>
Message-ID: <487460E2.7070800@gmail.com>
Singh Raina, Ajeet wrote:
>
> So My Nodes are seeing each other.Upto this Its Fine.
>
> Now I have one script called tester.sh placed in 106 machine and All I
> am adding it to Script Section under Resource giving the full path.
>
I think you need to attach that script resource to a service, so that
rgmanager can check its status at regular intervals.
Make sure the script is LSB compliant:
http://refspecs.freestandards.org/LSB_2.0.1/LSB-Core/LSB-Core/iniscrptact.html
http://sources.redhat.com/cluster/wiki/FAQ/RGManager#rgm_wontrestart
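Very roughly, rgmanager expects a script resource to behave like this (a
bare-bones sketch only -- the real requirements are in the links above, and
the path /root/tester.sh is just a placeholder taken from your mail):
#!/bin/bash
# /root/tester.sh - placeholder LSB-style wrapper around whatever the script does
case "$1" in
    start)
        # do the real work here; exit status 0 means started OK
        touch /var/run/tester.pid
        ;;
    stop)
        rm -f /var/run/tester.pid
        ;;
    status)
        # rgmanager polls this; a non-zero exit means "failed" and triggers recovery
        [ -f /var/run/tester.pid ]
        ;;
    *)
        echo "usage: $0 {start|stop|status}"
        exit 1
        ;;
esac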
>
> Now Again I am restarting the service in order.
>
> Now The Cluster.conf file is same in both the system
>
> Say,if I reboot the 106 system, Will the next Server show running the
> script?????
>
>
The question is not clear to me. Can you please give more details?
Neependra
From swhiteho at redhat.com Wed Jul 9 08:44:24 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Wed, 09 Jul 2008 09:44:24 +0100
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080708221533.GI15038@fieldses.org>
References: <20080626183529.GD10593@fieldses.org>
<20080626191106.GA11945@fieldses.org>
<20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
Message-ID: <1215593064.3411.6.camel@localhost.localdomain>
Hi,
On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> > > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > > > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > > > + write(control_fd, in, sizeof(struct dlm_plock_info));
> > >
> > > Gah, sorry, I keep fixing that and it keeps reappearing.
> > >
> > >
> > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> > >
> > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> > > > in "D" state in dlm_rcom_status(), so I guess the second node isn't
> > > > getting some dlm reply it expects?
> > >
> > > dlm inter-node communication is not working here for some reason. There
> > > must be something unusual with the way the network is configured on the
> > > nodes, and/or a problem with the way the cluster code is applying the
> > > network config to the dlm.
> > >
> > > Ah, I just remembered what this sounds like; we see this kind of thing
> > > when a network interface has multiple IP addresses, and/or routing is
> > > configured strangely. Others cc'ed could offer better details on exactly
> > > what to look for.
> >
> > OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
> > neither, and it's entirely likely there's some obvious misconfiguration.
> > On the kvm host there are 4 virtual interfaces bridged together:
>
> I ran wireshark on vnet0 while doing the second mount; what I saw was
> the second machine opened a tcp connection to port 21064 on the first
> (which had already completed the mount), and sent it a single message
> identified by wireshark as "DLM3" protocol, type recovery command:
> status command. It got back an ACK then a RST.
>
> Then the same happened in the other direction, with the first machine
> sending a similar message to port 21064 on the second, which then reset
> the connection.
>
> --b.
>
An ACK & RST for the same packet? Or was that a SYN-ACK for the SYN and
then an RST for the following data packet? Could you post the trace or
put it somewhere we can see it?
Steve.
From ccaulfie at redhat.com Wed Jul 9 08:51:02 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Wed, 09 Jul 2008 09:51:02 +0100
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <1215593064.3411.6.camel@localhost.localdomain>
References: <20080626183529.GD10593@fieldses.org> <20080626191106.GA11945@fieldses.org> <20080626203315.GB13293@fieldses.org> <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
Message-ID: <48747BF6.2060001@redhat.com>
Steven Whitehouse wrote:
> Hi,
>
> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>
>>>>
>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>> getting some dlm reply it expects?
>>>> dlm inter-node communication is not working here for some reason. There
>>>> must be something unusual with the way the network is configured on the
>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>> network config to the dlm.
>>>>
>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>> when a network interface has multiple IP addresses, and/or routing is
>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>> what to look for.
>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>> On the kvm host there are 4 virtual interfaces bridged together:
>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>> the second machine opened a tcp connection to port 21064 on the first
>> (which had already completed the mount), and sent it a single message
>> identified by wireshark as "DLM3" protocol, type recovery command:
>> status command. It got back an ACK then a RST.
>>
>> Then the same happened in the other direction, with the first machine
>> sending a similar message to port 21064 on the second, which then reset
>> the connection.
>>
That's a symptom of the "connect from non-cluster node" error in the
DLM. It's got a connection from an IP address that is not known to cman.
So it closes it as a spoofer.
You'll need to check the routing of the interfaces. The most common
cause of this sort of error is having two interfaces on the same
physical (or internal) network.
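A quick way to check (a sketch -- substitute your own node names and
addresses) is to compare the addresses cman knows about with the source
address the kernel would actually pick to reach the other node:
  cman_tool status        (the "Node addresses:" line, if your version prints it)
  cman_tool nodes
  ip addr show
  ip route get 192.168.122.130    (run on piglet1, i.e. using the address of piglet2)
If the "src" address in the route lookup isn't one of the addresses cman
lists for that node, the DLM will see the connection as coming from a
non-cluster node.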
--
Chrissie
From swhiteho at redhat.com Wed Jul 9 08:50:31 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Wed, 09 Jul 2008 09:50:31 +0100
Subject: [Linux-cluster] lock_dlm to lock_nolock
In-Reply-To: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com>
References: <68f132770807081020s206ec2bdg2157fe303f2819cb@mail.gmail.com>
Message-ID: <1215593431.3411.9.camel@localhost.localdomain>
Hi,
On Tue, 2008-07-08 at 13:20 -0400, Ozgur Akan wrote:
> Hi,
>
> Can I mount a gfs filesystem formatted with lock_dlmlock and use it
> without a problem in the cluster if I have proper fencing and that fs
> is mounted to only one node at a time?
>
Single node DLM is quite possible, and I use it for testing from time to
time. Below though you appear to be using lock_nolock which is also ok
provided you only use it on one node at a time,
Steve.
> mount -o
> lockproto=lock_nolock /dev/mapper/cluster_vg-test2_lv /gfstwo/
>
>
> [root at rhtest01 ~]# ./ping -rw /gfstwo/test 1
> data increment = 1
> 140012 locks/sec
> [root at rhtest01 ~]# gfs2_tool df /gfstwo/
> /gfstwo:
> SB lock proto = "lock_dlm"
> SB lock table = "testcluster:gfstwo"
> SB ondisk format = 1801
> SB multihost format = 1900
> Block size = 4096
> Journals = 3
> Resource Groups = 60
> Mounted lock proto = "lock_nolock"
> Mounted lock table = "testcluster:gfstwo"
> Mounted host data = ""
> Journal number = 0
> Lock module flags = 1
> Local flocks = TRUE
>
>
> thanks,
> Ozgur Akan
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
From swhiteho at redhat.com Wed Jul 9 08:51:01 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Wed, 09 Jul 2008 09:51:01 +0100
Subject: [Linux-cluster] Re: quota and noatime configurations
In-Reply-To:
References: <68f132770807071418w3e914c81nd4b15e08ad64669c@mail.gmail.com>
<1215510236.3475.0.camel@localhost.localdomain>
Message-ID: <1215593461.3411.11.camel@localhost.localdomain>
Hi,
On Tue, 2008-07-08 at 18:12 +0100, Andrew Price wrote:
> On 08/07/08 10:43, Steven Whitehouse wrote:
> > On Mon, 2008-07-07 at 17:18 -0400, Ozgur Akan wrote:
> >> Hi,
> >>
> >> Even I have ' options="noatime,quota=off" ' in my cluster.conf
> >> file,
> >> I see [gfs2_quotad] running and
> >>
> >> I can not see quota in mtab file
> >> /dev/mapper/vg_bbn-lv_aas /my/home gfs2 rw,noatime,hostdata=jid=0:id
> >> =196610:first=1 0 0
> >>
> >> is this normal?
> >>
> > Yes, it will likely only appear if you turn it on since the default is
> > off,
>
> If I'm reading the code correctly, gfs2_quotad is always started
> regardless of the quota options.
>
Yes, that's also true,
Steve.
From ajeet.singh.raina at logica.com Wed Jul 9 09:56:40 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 15:26:40 +0530
Subject: [Linux-cluster] Alternative to Shared Storage..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
Hello Guys,
I have just now been successful in configuring the two-node fail-over
cluster. It was tested on RHEL 4.0 U2. Now I have a few queries and I know
you people are going to help me out. Since this was a small two-node cluster
setup, I have a few scripts which I run on one primary server, and on
disabling the Ethernet on one node, the other takes responsibility for
starting the same service, plus rebooting the disabled system.
That is working fine.
Now let me tell you, I don't have shared storage. Is there any alternative
for that?
Somewhere I read about iSCSI, but I don't know whether it will be helpful.
I have one spare RHEL system with 40 GB. Can I make it shared storage?
It's just a matter of testing a script.
Do let me know how this could be possible, or any doc which talks about
that?
From ajeet.singh.raina at logica.com Wed Jul 9 09:58:37 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 15:28:37 +0530
Subject: [Linux-cluster] Setting Up Two Node Cluster..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17931@in-ex004.groupinfra.com>
It's done. I just started the service on both nodes and failover is taking
place.
Thanks anyway.
From Prakash.P at lsi.com Wed Jul 9 10:09:41 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Wed, 9 Jul 2008 18:09:41 +0800
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com>
Yes, you can do this with iSCSI. You need to install an iSCSI software target on the spare RHEL machine and configure the disk space as a virtual SCSI volume.
On both machines of the two-node cluster you need to install iSCSI initiators, establish an iSCSI session with your target server, and you can then see the volume on both servers.
If you are new to iSCSI and feel it would take too much time, you can go for NAS instead: simply create an NFS share using the spare RHEL machine and export it to both nodes of the cluster. On the cluster nodes, create NFS resources to mount the share automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
From breeves at redhat.com Wed Jul 9 10:09:00 2008
From: breeves at redhat.com (Bryn M. Reeves)
Date: Wed, 09 Jul 2008 11:09:00 +0100
Subject: [Linux-cluster] Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
Message-ID: <48748E3C.5060002@redhat.com>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Singh Raina, Ajeet wrote:
> Hello Guys,
>
>
>
> Just Now I have been successful in configuring the two Node Fail-over Cluster.
> It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna
You probably want to evaluate something a little newer - RHEL-4.2 was
released some time ago and there have been significant fixes and feature
enhancements in the releases since that time.
> Now Let me tell you I don't have Shared Storage. Is there any alternative for that.
>
> Somewhere I read about iSCSI but donnno whether it will be helpful.
I use software-based iSCSI on pretty much all my test systems - it works
great. You need the iSCSI initiator package installed on the systems
that will import the devices and an iSCSI target installed on the host
that exports the storage. There are several target projects out there in
varying states of completeness and functionality. I've used iet (iSCSI
enterprise target) on RHEL4 and there is now also stgt (scsi target
utils) which is included in the Cluster Storage channel for RHEL5.
> Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
http://stgt.berlios.de/
http://iscsitarget.sourceforge.net/
RHEL5 also supports installing to and booting from software iSCSI targets.
Regards,
Bryn.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org
iEYEARECAAYFAkh0jjwACgkQ6YSQoMYUY94AnACgnmUhUZ1vB8lqH2je14KdJEu5
p/IAoNfzvAiW1YGPFwahk5PAcXfVYzu/
=ZHpD
-----END PGP SIGNATURE-----
From ajeet.singh.raina at logica.com Wed Jul 9 10:54:04 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 16:24:04 +0530
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com>
Hi,
I would like to go for the iSCSI configuration. That sounds good; at least I
will learn something new.
Can you provide me with step-by-step docs?
One more thing - what minimum size of hard disk do we need for that?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 3:40 PM
To: linux clustering
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
Yes you can do with iSCSI. You need to install iSCSI software target on
the spare RHEL machine & configure the disk space as virtual SCSI
volume.
And on the both machines of the two node cluster you need to install
iSCSI initiators establish iSCSI session with your target server & u can
see the volume on both these servers.
If you are new to iSCSI & feel it takes more time. You can go for NAS,
simply create a NFS share using the spare RHEL machine & export it to
both the nodes of cluster. On Cluster nodes create some NFS resources
for mounting the share automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over
Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know
you people gonna help me out.Since This was small two Node Cluster
Setup.I have few script which I am running on one primary server and on
disabling Ethernet on one , the other is taking responsibility to start
the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative
for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about
that?
From ajeet.singh.raina at logica.com Wed Jul 9 10:56:55 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 16:26:55 +0530
Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com>
Just for information, how would we configure the NAS concept you mentioned
earlier?
What should I actually share?
Are you talking about the script?
________________________________
From: Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 4:24 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage..
Hi,
I would like to go for iSCSI Configuration.That sound good.Atleast I
will learn something new.
Can You provide me with steps by steps docs.
One more thing - What Minimum Size of Hard Disk we need for That?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 3:40 PM
To: linux clustering
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
Yes you can do with iSCSI. You need to install iSCSI software target on
the spare RHEL machine & configure the disk space as virtual SCSI
volume.
And on the both machines of the two node cluster you need to install
iSCSI initiators establish iSCSI session with your target server & u can
see the volume on both these servers.
If you are new to iSCSI & feel it takes more time. You can go for NAS,
simply create a NFS share using the spare RHEL machine & export it to
both the nodes of cluster. On Cluster nodes create some NFS resources
for mounting the share automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over
Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know
you people gonna help me out.Since This was small two Node Cluster
Setup.I have few script which I am running on one primary server and on
disabling Ethernet on one , the other is taking responsibility to start
the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative
for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about
that?
From breeves at redhat.com Wed Jul 9 10:54:51 2008
From: breeves at redhat.com (Bryn M. Reeves)
Date: Wed, 09 Jul 2008 11:54:51 +0100
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com>
Message-ID: <487498FB.50907@redhat.com>
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Singh Raina, Ajeet wrote:
> Hi,
>
>
>
> I would like to go for iSCSI Configuration.That sound good.Atleast I will learn
> something new.
>
> Can You provide me with steps by steps docs.
>
> One more thing - What Minimum Size of Hard Disk we need for That?
You don't - you can create iSCSI devices using either disk partitions if
you have some spare, or just a file located in any file system with
enough free space. I often do testing with iSCSI devices that are just a
few 10s of MiB in size.
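As a very rough sketch (IET on the exporting host, open-iscsi on the
cluster nodes; the IQN, paths and size below are made up, so adjust to
taste):
  # on the host exporting the storage (iSCSI Enterprise Target),
  # back the LUN with a small file and describe it in /etc/ietd.conf:
  dd if=/dev/zero of=/srv/iscsi/disk0.img bs=1M count=64
  echo 'Target iqn.2008-07.net.example:disk0' >> /etc/ietd.conf
  echo '    Lun 0 Path=/srv/iscsi/disk0.img,Type=fileio' >> /etc/ietd.conf
  service iscsi-target start    # or however your target package names its init script
  # on each node that should see the device (open-iscsi initiator):
  iscsiadm -m discovery -t sendtargets -p <target-ip>
  iscsiadm -m node -T iqn.2008-07.net.example:disk0 -p <target-ip> --login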
Regards,
Bryn.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org
iEYEARECAAYFAkh0mPsACgkQ6YSQoMYUY94JcwCgqF2K9a8GrrHLfdW9a9LLqrjt
b/wAoLjvKMIA2l0NOBc8+fYl2zzGg7t7
=lzRW
-----END PGP SIGNATURE-----
From Prakash.P at lsi.com Wed Jul 9 11:34:41 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Wed, 9 Jul 2008 19:34:41 +0800
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com>
References: <2B52F34989FB054FAF95019F74B992D50539BC7A9F@hkgmail01.lsi.com>
<0139539A634FD04A99C9B8880AB70CB209B17932@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7AC0@hkgmail01.lsi.com>
Google iSCSI Enterprise target & Open iSCSI Initiator. They have their own How-To's & documentation which will help you.
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 4:24 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage..
Hi,
I would like to go for iSCSI Configuration.That sound good.Atleast I will learn something new.
Can You provide me with steps by steps docs.
One more thing - What Minimum Size of Hard Disk we need for That?
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 3:40 PM
To: linux clustering
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
Yes you can do with iSCSI. You need to install iSCSI software target on the spare RHEL machine & configure the disk space as virtual SCSI volume.
And on the both machines of the two node cluster you need to install iSCSI initiators establish iSCSI session with your target server & u can see the volume on both these servers.
If you are new to iSCSI & feel it takes more time. You can go for NAS, simply create a NFS share using the spare RHEL machine & export it to both the nodes of cluster. On Cluster nodes create some NFS resources for mounting the share automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
From Prakash.P at lsi.com Wed Jul 9 11:39:05 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Wed, 9 Jul 2008 19:39:05 +0800
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17934@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7AC2@hkgmail01.lsi.com>
You should create a directory on the spare server and export that directory as an NFS share. On the cluster nodes there should be an option to create an NFS resource; this will mount the shared directory on your cluster nodes, so you will be using that exported directory as the share. Then, if you wish, you can copy the required scripts into that directory and run them from there, which gives you the flexibility of failover and failback.
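Roughly like this (a sketch only; the path, hostname and subnet are just
examples):
  # on the spare server
  mkdir -p /export/cluster
  echo '/export/cluster 10.14.236.0/24(rw,sync,no_root_squash)' >> /etc/exports
  exportfs -ra
  service nfs start
  # on a cluster node, to test by hand (an rgmanager netfs resource does the
  # same thing for you at failover time)
  mount -t nfs sparehost:/export/cluster /mnt/shared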
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 4:27 PM
To: linux clustering
Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage..
Just for Information, How Will we configure NAS Concept you said earlier.
What Should I share actually?
Are you Talking about the Script?
________________________________
From: Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 4:24 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage..
Hi,
I would like to go for the iSCSI configuration. That sounds good; at least I will learn something new.
Can you provide me with step-by-step docs?
One more thing: what is the minimum hard disk size we need for that?
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 3:40 PM
To: linux clustering
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
Yes, you can do this with iSCSI. You need to install an iSCSI software target on the spare RHEL machine and configure the disk space as a virtual SCSI volume.
On both machines of the two-node cluster you need to install iSCSI initiators and establish an iSCSI session with your target server; then you can see the volume on both of these servers.
If you are new to iSCSI and feel it would take too much time, you can go with NAS instead: simply create an NFS share using the spare RHEL machine and export it to both nodes of the cluster. On the cluster nodes, create NFS resources so that the share is mounted automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna help me out.Since This was small two Node Cluster Setup.I have few script which I am running on one primary server and on disabling Ethernet on one , the other is taking responsibility to start the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From singh.rajeshwar at gmail.com Wed Jul 9 12:16:22 2008
From: singh.rajeshwar at gmail.com (Rajeshwar Singh)
Date: Wed, 9 Jul 2008 17:46:22 +0530
Subject: [Linux-cluster] Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
Message-ID:
Hi,
You can use FreeNAS to turn an Intel/AMD machine into a NAS and do all
the testing of the protocols (iSCSI, NFS, and CIFS).
regards
2008/7/9 Singh Raina, Ajeet :
> Hello Guys,
>
>
>
> Just Now I have been successful in configuring the two Node Fail-over
> Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know you
> people gonna help me out.Since This was small two Node Cluster Setup.I have
> few script which I am running on one primary server and on disabling
> Ethernet on one , the other is taking responsibility to start the same
> service plus rebooting the disabled system
>
> That is working fine.
>
>
>
> Now Let me tell you I don't have Shared Storage.Is there any alternative
> for that.
>
> Somewhere I read about iSCSI but donnno whether it will be helpful.
>
>
>
> I have one RHEL System of 40 GB. Can I make it Shared Storage.
>
> Its Just a matter of Testing a script.
>
> Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
>
> This e-mail and any attachment is for authorised use by the intended
> recipient(s) only. It may contain proprietary material, confidential
> information and/or be subject to legal privilege. It should not be copied,
> disclosed to, retained or used by, any other party. If you are not an
> intended recipient then please promptly delete this e-mail and any
> attachment and all copies and inform the sender. Thank you.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mm at yuhu.biz Wed Jul 9 12:30:04 2008
From: mm at yuhu.biz (Marian Marinov)
Date: Wed, 9 Jul 2008 15:30:04 +0300
Subject: [Linux-cluster] Alternative to Shared Storage..
In-Reply-To:
References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
Message-ID: <200807091530.04510.mm@yuhu.biz>
You can also create your own shared storage using GlusterFS.
This way the only thing you will need is FUSE support in your kernels,
without touching anything else on the system.
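(A quick sanity check for that prerequisite, shown only as a sketch; standard commands, nothing GlusterFS-specific:)
modprobe fuse
grep fuse /proc/filesystems    # should list "fuse" if the kernel has FUSE support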
regards
Marian Marinov
On Wednesday 09 July 2008 15:16:22 Rajeshwar Singh wrote:
> Hi,
> You you can use freeNAS to emulate an intel/amd machine as NAS and do all
> the testing (iscsi and nfs and cifs) of protocols.
>
> regards
>
> 2008/7/9 Singh Raina, Ajeet :
> > Hello Guys,
> >
> >
> >
> > Just Now I have been successful in configuring the two Node Fail-over
> > Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know
> > you people gonna help me out.Since This was small two Node Cluster
> > Setup.I have few script which I am running on one primary server and on
> > disabling Ethernet on one , the other is taking responsibility to start
> > the same service plus rebooting the disabled system
> >
> > That is working fine.
> >
> >
> >
> > Now Let me tell you I don't have Shared Storage.Is there any alternative
> > for that.
> >
> > Somewhere I read about iSCSI but donnno whether it will be helpful.
> >
> >
> >
> > I have one RHEL System of 40 GB. Can I make it Shared Storage.
> >
> > Its Just a matter of Testing a script.
> >
> > Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
> >
> > This e-mail and any attachment is for authorised use by the intended
> > recipient(s) only. It may contain proprietary material, confidential
> > information and/or be subject to legal privilege. It should not be
> > copied, disclosed to, retained or used by, any other party. If you are
> > not an intended recipient then please promptly delete this e-mail and any
> > attachment and all copies and inform the sender. Thank you.
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
From ajeet.singh.raina at logica.com Wed Jul 9 13:22:07 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Wed, 9 Jul 2008 18:52:07 +0530
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7AC2@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17936@in-ex004.groupinfra.com>
Hi,
I attempted an NFS setup. What I did: I have one more RHEL machine,
where I ran the following commands:
# vi /etc/exports
/datashare *(rw,sync,no_root_squash)
# service portmap restart
# service nfs restart
[root at pe ~]# exportfs
/datashare
Is that fine?
Now, I went to the two nodes and tried:
[root at 1014236106 ~]# showmount -e 10.14.236.169
Export list for 10.14.236.169:
/datashare *
[root at 1014236106 ~]#
The same share is shown by the second cluster node.
Now I opened system-config-cluster, went to Cluster Configuration > Add New
Resource, and now I am confused.
There are three options:
1. NFS Mount
2. NFS Export
3. NFS Client
When I attempted NFS Export, it just asks for the NAME OF EXPORT
CONFIGURATION. What is that? Is it the same as the /datashare entry, or
something else?
Do I need to choose NFS Mount, NFS Export, or NFS Client?
Let me describe the situation again: I have two cluster nodes, and I am
using NFS as the alternative to shared storage.
Please help.
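(Only as a rough sketch from memory rather than a confirmed answer: the rgmanager resource behind the GUI's "NFS Mount" option is the netfs agent, and a hand-written cluster.conf fragment for this situation might look something like the following; the mountpoint and service name are illustrative, and the attribute names should be double-checked against /usr/share/cluster/netfs.sh.)
<resources>
    <netfs name="datashare" host="10.14.236.169" export="/datashare"
           mountpoint="/mnt/datashare" fstype="nfs" options="rw,sync" force_unmount="1"/>
</resources>
<service name="testscript" autostart="1">
    <netfs ref="datashare"/>
</service>
(As far as I recall, the "NFS Export" and "NFS Client" resources are for the opposite case, where the cluster itself serves NFS to other machines.)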
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 5:09 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage..
You should create a directory in Spare server & export that directory as
NFS Share. On the cluster nodes there should be an option to create NFS
resource. This will mount the Shared directory in your cluster node. So
you are going to use that exported directory as Share. Then if you wish
you can copy the required scripts into that directory & run the scripts
from there hence it will provide you the flexibility of failover &
failback.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Wednesday, July 09, 2008 4:27 PM
To: linux clustering
Subject: FW: [Linux-cluster] RE: Alternative to Shared Storage..
Just for Information, How Will we configure NAS Concept you said
earlier.
What Should I share actually?
Are you Talking about the Script?
________________________________
From: Singh Raina, Ajeet
Sent: Wednesday, July 09, 2008 4:24 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: Alternative to Shared Storage..
Hi,
I would like to go for iSCSI Configuration.That sound good.Atleast I
will learn something new.
Can You provide me with steps by steps docs.
One more thing - What Minimum Size of Hard Disk we need for That?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Wednesday, July 09, 2008 3:40 PM
To: linux clustering
Subject: [Linux-cluster] RE: Alternative to Shared Storage..
Yes you can do with iSCSI. You need to install iSCSI software target on
the spare RHEL machine & configure the disk space as virtual SCSI
volume.
And on the both machines of the two node cluster you need to install
iSCSI initiators establish iSCSI session with your target server & u can
see the volume on both these servers.
If you are new to iSCSI & feel it takes more time. You can go for NAS,
simply create a NFS share using the spare RHEL machine & export it to
both the nodes of cluster. On Cluster nodes create some NFS resources
for mounting the share automatically.
Regards,
Prakash
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Wednesday, July 09, 2008 3:27 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Alternative to Shared Storage..
Hello Guys,
Just Now I have been successful in configuring the two Node Fail-over
Cluster. It was tested on RHEL 4.0 U2.Now I have few queries and I know
you people gonna help me out.Since This was small two Node Cluster
Setup.I have few script which I am running on one primary server and on
disabling Ethernet on one , the other is taking responsibility to start
the same service plus rebooting the disabled system
That is working fine.
Now Let me tell you I don't have Shared Storage.Is there any alternative
for that.
Somewhere I read about iSCSI but donnno whether it will be helpful.
I have one RHEL System of 40 GB. Can I make it Shared Storage.
Its Just a matter of Testing a script.
Do Let me Know how gonna it be possible.Or Any Doc Which Talk about
that?
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From bfilipek at crscold.com Wed Jul 9 13:51:57 2008
From: bfilipek at crscold.com (Brad Filipek)
Date: Wed, 9 Jul 2008 08:51:57 -0500
Subject: [Linux-cluster] Basic 2 node NFS cluster setup help
Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com>
I am a little unsure on how to properly setup an NFS export on my 2 node cluster. I have 1 service in cluster manager called "cluster" and 4 resources:
1) Virtual IP of 172.25.7.10 (which binds to eth0)
2) Virtual IP of 172.25.8.10 (which binds to eth1)
3) ext3 file system mount at /SAN/LogVol2 called "data"
4) ext3 file system mount at /SAN/LogVol3 called "shared"
When I start the cluster services using just these 4 resources assigned to my one service called "cluster", everything starts up and works fine.
What I need to do is assign 3 NFS exports:
/SAN/LogVol3/files webserver(ro,sync)
/SAN/LogVol3/webup webserver(rw,sync)
/SAN/LogVol2/webdown webserver(ro,sync)
Do I need to create 3 new "NFS Export" resources for these? When I select the "NFS Export" option within Cluster Suite, I only have one field to fill in - Name. It does not let me select the path that I want to export or which options to allow, such as the host, ro or rw, etc. I am just trying to make the above exports available on my cluster's virtual IP of 172.25.7.10 instead of setting them up on each of the two nodes and manually starting the NFS service on whichever node is active in the cluster. Do I still need to create an /etc/exports file with all 3 of these entries on each node? Or is there a config file somewhere else? I read the NFS cookbook, but it explains how to set up NFS using multiple services (I only have one service) with active/active GFS (I am using ext3 in active/passive).
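(A hedged sketch from memory, not a confirmed answer: in rgmanager the export path normally comes from the parent fs resource, the nfsexport child switches NFS on for that mount, and nfsclient children carry the host and ro/rw options that the "NFS Export" dialog does not ask for. Resource names and the device path below are illustrative; the real parameters are documented in /usr/share/cluster/nfsexport.sh and nfsclient.sh.)
<service name="cluster" autostart="1">
    <ip address="172.25.7.10"/>
    <fs name="shared" device="/dev/mapper/SAN-LogVol3" mountpoint="/SAN/LogVol3" fstype="ext3">
        <nfsexport name="exports-logvol3">
            <nfsclient name="files" target="webserver" options="ro,sync" path="/SAN/LogVol3/files"/>
            <nfsclient name="webup" target="webserver" options="rw,sync" path="/SAN/LogVol3/webup"/>
        </nfsexport>
    </fs>
</service>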
Thanks in advance for any help.
Brad
Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited.
If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmacfarland at nexatech.com Wed Jul 9 15:28:37 2008
From: jmacfarland at nexatech.com (Jeff Macfarland)
Date: Wed, 09 Jul 2008 10:28:37 -0500
Subject: [Linux-cluster] Alternative to Shared Storage..
In-Reply-To: <48748E3C.5060002@redhat.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17930@in-ex004.groupinfra.com>
<48748E3C.5060002@redhat.com>
Message-ID: <4874D925.8010000@nexatech.com>
Do any of the software targets yet support scsi reservations? The one I
work with mostly (iet) unfortunately does not.
Bryn M. Reeves wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Singh Raina, Ajeet wrote:
>> Hello Guys,
>>
>>
>>
>> Just Now I have been successful in configuring the two Node Fail-over Cluster.
>> It was tested on RHEL 4.0 U2.Now I have few queries and I know you people gonna
>
> You probably want to evaluate something a little newer - RHEL-4.2 was
> released some time ago and there have been significant fixes and feature
> enhancements in the releases since that time.
>
>> Now Let me tell you I don?t have Shared Storage.Is there any alternative for that.
>>
>> Somewhere I read about iSCSI but donnno whether it will be helpful.
>
> I use software-based iSCSI on pretty much all my test systems - it works
> great. You need the iSCSI initiator package installed on the systems
> that will import the devices and an iSCSI target installed on the host
> that exports the storage. There are several target projects out there in
> varying states of completeness and functionality. I've used iet (iSCSI
> enterprise target) on RHEL4 and there is now also stgt (scsi target
> utils) which is included in the Cluster Storage channel for RHEL5.
>
>> Do Let me Know how gonna it be possible.Or Any Doc Which Talk about that?
>
> http://stgt.berlios.de/
> http://iscsitarget.sourceforge.net/
>
> RHEL5 also supports installing to and booting from software iSCSI targets.
>
> Regards,
> Bryn.
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.9 (GNU/Linux)
> Comment: Using GnuPG with Fedora - http://enigmail.mozdev.org
>
> iEYEARECAAYFAkh0jjwACgkQ6YSQoMYUY94AnACgnmUhUZ1vB8lqH2je14KdJEu5
> p/IAoNfzvAiW1YGPFwahk5PAcXfVYzu/
> =ZHpD
> -----END PGP SIGNATURE-----
--
Jeff Macfarland (jmacfarland at nexatech.com)
Nexa Technologies - 972.747.8879
Systems Administrator
GPG Key ID: 0x5F1CA61B
GPG Key Server: hkp://wwwkeys.pgp.net
From bfields at fieldses.org Wed Jul 9 15:40:04 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Wed, 9 Jul 2008 11:40:04 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <48747BF6.2060001@redhat.com>
References: <20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com>
Message-ID: <20080709154004.GC5780@fieldses.org>
On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
> Steven Whitehouse wrote:
>> Hi,
>>
>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>>
>>>>>
>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>>> getting some dlm reply it expects?
>>>>> dlm inter-node communication is not working here for some reason. There
>>>>> must be something unusual with the way the network is configured on the
>>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>>> network config to the dlm.
>>>>>
>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>>> when a network interface has multiple IP addresses, and/or routing is
>>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>>> what to look for.
>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>>> On the kvm host there are 4 virtual interfaces bridged together:
>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>>> the second machine opened a tcp connection to port 21064 on the first
>>> (which had already completed the mount), and sent it a single message
>>> identified by wireshark as "DLM3" protocol, type recovery command:
>>> status command. It got back an ACK then a RST.
>>>
>>> Then the same happened in the other direction, with the first machine
>>> sending a similar message to port 21064 on the second, which then reset
>>> the connection.
>>>
>
> That's a symptom of the "connect from non-cluster node" error in the
> DLM.
I think I am getting a message to that effect in my logs.
> It's got a connection from an IP address that is not known to cman.
> So it closes it as a spoofer
OK. Is there an easy way to see the list of ip addresses known to cman?
> You'll need to check the routing of the interfaces. The most common
> cause of this sort of error is having two interfaces on the same
> physical (or internal) network.
Thanks, that's helpful.
--b.
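(A quick way to check for the multiple-address / shared-subnet situation described above, as a sketch with standard commands:)
ip addr show        # look for more than one address per interface
ip route show       # look for two interfaces routing the same subnet
cman_tool status    # should include a "Node addresses:" line showing what cman binds to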
From bfields at fieldses.org Wed Jul 9 15:29:46 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Wed, 9 Jul 2008 11:29:46 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <1215593064.3411.6.camel@localhost.localdomain>
References: <20080626203315.GB13293@fieldses.org>
<20080626211052.GC13293@fieldses.org>
<20080627171845.GD19105@redhat.com>
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
Message-ID: <20080709152946.GB5780@fieldses.org>
On Wed, Jul 09, 2008 at 09:44:24AM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
> > On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> > > On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> > > > On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > > > > - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > > > > + write(control_fd, in, sizeof(struct dlm_plock_info));
> > > >
> > > > Gah, sorry, I keep fixing that and it keeps reappearing.
> > > >
> > > >
> > > > > Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> > > >
> > > > > It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> > > > > in "D" state in dlm_rcom_status(), so I guess the second node isn't
> > > > > getting some dlm reply it expects?
> > > >
> > > > dlm inter-node communication is not working here for some reason. There
> > > > must be something unusual with the way the network is configured on the
> > > > nodes, and/or a problem with the way the cluster code is applying the
> > > > network config to the dlm.
> > > >
> > > > Ah, I just remembered what this sounds like; we see this kind of thing
> > > > when a network interface has multiple IP addresses, and/or routing is
> > > > configured strangely. Others cc'ed could offer better details on exactly
> > > > what to look for.
> > >
> > > OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
> > > neither, and it's entirely likely there's some obvious misconfiguration.
> > > On the kvm host there are 4 virtual interfaces bridged together:
> >
> > I ran wireshark on vnet0 while doing the second mount; what I saw was
> > the second machine opened a tcp connection to port 21064 on the first
> > (which had already completed the mount), and sent it a single message
> > identified by wireshark as "DLM3" protocol, type recovery command:
> > status command. It got back an ACK then a RST.
> >
> > Then the same happened in the other direction, with the first machine
> > sending a similar message to port 21064 on the second, which then reset
> > the connection.
> >
> > --b.
> >
> An ACK & RST for the same packet? Or was than an ACK SYN for the SYN and
> then an RST for the following data packet? Could you post the trace or
> put it somewhere we can see it?
Sure, thanks. It's at
http://www.fieldses.org/~bfields/failed-dlm.pcap
http://www.fieldses.org/~bfields/failed-dlm-filtered.pcap
(The second is just the dlm traffic, with all the ais, ssh, dns, etc.
filtered out.)
--b.
From ccaulfie at redhat.com Wed Jul 9 15:50:14 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Wed, 09 Jul 2008 16:50:14 +0100
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080709154004.GC5780@fieldses.org>
References: <20080626211052.GC13293@fieldses.org> <20080627171845.GD19105@redhat.com> <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com>
<20080709154004.GC5780@fieldses.org>
Message-ID: <4874DE36.6030704@redhat.com>
J. Bruce Fields wrote:
> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
>> Steven Whitehouse wrote:
>>> Hi,
>>>
>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>>>
>>>>>>
>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>>>> getting some dlm reply it expects?
>>>>>> dlm inter-node communication is not working here for some reason. There
>>>>>> must be something unusual with the way the network is configured on the
>>>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>>>> network config to the dlm.
>>>>>>
>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>>>> when a network interface has multiple IP addresses, and/or routing is
>>>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>>>> what to look for.
>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>>>> On the kvm host there are 4 virtual interfaces bridged together:
>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>>>> the second machine opened a tcp connection to port 21064 on the first
>>>> (which had already completed the mount), and sent it a single message
>>>> identified by wireshark as "DLM3" protocol, type recovery command:
>>>> status command. It got back an ACK then a RST.
>>>>
>>>> Then the same happened in the other direction, with the first machine
>>>> sending a similar message to port 21064 on the second, which then reset
>>>> the connection.
>>>>
>> That's a symptom of the "connect from non-cluster node" error in the
>> DLM.
>
> I think I am getting a message to that affect in my logs.
>
>> It's got a connection from an IP address that is not known to cman.
>> So it closes it as a spoofer
>
> OK. Is there an easy way to see the list of ip addresses known to cman?
yes,
cman_tool nodes -a
will show you all the nodes and their known IP addresses
--
Chrissie
From jerlyon at gmail.com Wed Jul 9 16:04:39 2008
From: jerlyon at gmail.com (Jeremy Lyon)
Date: Wed, 9 Jul 2008 10:04:39 -0600
Subject: [Linux-cluster] clustat requires root
Message-ID: <779919740807090904u77b0b602q8eca5409665ca018@mail.gmail.com>
Hi,
I just noticed that in RHEL 4 clustat could be run by any user, and now in
RHEL 5 it requires root. Was this done on purpose, or is it a by-product of
the changes in cluster from v1 -> v2? Is there anything that can be done to
allow a user to run clustat without sudo? I don't think I want to set the
suid bit on it.
RHEL4:
rhel4:/u/oracle> /usr/sbin/clustat
Member Status: Quorate
  Member Name                  Status
  ------ ----                  ------
  rhel4-2                      Online, rgmanager
  rhel4                        Online, Local, rgmanager
  Service Name                 Owner (Last)                 State
  ------- ----                 ------ ------                -----
  griddnvr                     rhel4                        started
  fibrbase                     rhel4                        started
  pcms2                        rhel4                        started
  notifprd                     rhel4                        started
  qtprod                       (none)                       disabled
rhel4:/u/oracle> /usr/sbin/clustat -v
clustat version 1.9.72
Connected via: CMAN/SM Plugin v1.1.7.4
rhel4:/u/oracle> rpm -q rgmanager
rgmanager-1.9.72-1
RHEL5:
rhel5 /u/oracle> /usr/sbin/clustat
Could not connect to CMAN: Permission denied
rhel5 /u/oracle> /usr/sbin/clustat -v
Could not connect to CMAN: Permission denied
rhel5 /u/oracle> rpm -q rgmanager
rgmanager-2.0.38-2.el5_2.1
[root at rhel5 ~]# clustat -v
clustat version DEVEL
TIA
-Jeremy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From bfields at fieldses.org Wed Jul 9 16:32:22 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Wed, 9 Jul 2008 12:32:22 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <4874DE36.6030704@redhat.com>
References:
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com>
Message-ID: <20080709163222.GF5780@fieldses.org>
On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
> J. Bruce Fields wrote:
>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
>>> Steven Whitehouse wrote:
>>>> Hi,
>>>>
>>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>>>>
>>>>>>>
>>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>>>>> getting some dlm reply it expects?
>>>>>>> dlm inter-node communication is not working here for some reason. There
>>>>>>> must be something unusual with the way the network is configured on the
>>>>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>>>>> network config to the dlm.
>>>>>>>
>>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>>>>> when a network interface has multiple IP addresses, and/or routing is
>>>>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>>>>> what to look for.
>>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>>>>> On the kvm host there are 4 virtual interfaces bridged together:
>>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>>>>> the second machine opened a tcp connection to port 21064 on the first
>>>>> (which had already completed the mount), and sent it a single message
>>>>> identified by wireshark as "DLM3" protocol, type recovery command:
>>>>> status command. It got back an ACK then a RST.
>>>>>
>>>>> Then the same happened in the other direction, with the first machine
>>>>> sending a similar message to port 21064 on the second, which then reset
>>>>> the connection.
>>>>>
>>> That's a symptom of the "connect from non-cluster node" error in the
>>> DLM.
>>
>> I think I am getting a message to that affect in my logs.
>>
>>> It's got a connection from an IP address that is not known to cman.
>>> So it closes it as a spoofer
>>
>> OK. Is there an easy way to see the list of ip addresses known to cman?
>
> yes,
>
> cman_tool nodes -a
>
> will show you all the nodes and their known IP addresses
piglet2:~# cman_tool nodes -a
Node Sts Inc Joined Name
1 M 376 2008-07-09 12:30:32 piglet1
Addresses: 192.168.122.129
2 M 368 2008-07-09 12:30:31 piglet2
Addresses: 192.168.122.130
3 M 380 2008-07-09 12:30:33 piglet3
Addresses: 192.168.122.131
4 M 372 2008-07-09 12:30:31 piglet4
Addresses: 192.168.122.132
These addresses are correct (and are the same addresses that show up in the
packet trace).
I must be overlooking something very obvious....
--b.
From wcarty at gmail.com Wed Jul 9 17:54:52 2008
From: wcarty at gmail.com (Wayne Carty)
Date: Wed, 9 Jul 2008 13:54:52 -0400
Subject: [Linux-cluster] Freezing GFS mount in a cluster
In-Reply-To: <004a01c8e0f5$b6d8ccd0$248a6670$@net>
References: <004a01c8e0f5$b6d8ccd0$248a6670$@net>
Message-ID:
I'm currently using the same iSCSI SAN with a 2-node cluster and not having
any issues. I'm currently running CentOS 4.6. Are you using Conga to manage
your cluster? What does clustat show when you run it before mounting your
GFS filesystem? I'm also using manual fencing, and so far I'm not having a
problem. Here is a look at my config. I'm not using it to run any services
or to mount the filesystems. It's just basic.
~
On Tue, Jul 8, 2008 at 8:25 AM, Kees Hoekzema wrote:
> Hello List,
>
> Recently we bought an Dell MD3000 iSCSI storage system and we are trying to
> get GFS running on it. I have 3 test servers hooked up to the MD3000i and I
> have the cluster working, including multipath and different paths.
>
> When I had the cluster up with all 3 nodes in the fence domain and
> cman_tool
> status reporting 3 nodes I made a GFS partition and formatted it:
> # gfs_mkfs -j 10 -p lock_dlm -t tweakers:webdata /dev/mapper/webdata-part1
>
> This worked and I could mount the filesystem on the server I made it on.
> However, as soon as I tried to mount it on one of the two other servers, I
> would get a freeze and get fenced. After a fresh reboot of the complete
> cluster I tried to mount it again. The first server could mount it, but any
> server that would try to mount it with the first server having the gfs
> mounted would crash.
>
> As I'm fairly new to cman/fencing/gfs-clusters, I was wondering if this is
> something 'silly' configuration error, or that there is something seriously
> wrong.
>
> Another thing I would like to know is where to get debug information. Right
> now there is not a lot debug information available, or at least I couldn't
> find it. One thing that particularly annoyed me was the ' Waiting for
> fenced
> to join the fence group.' message which didn't come with any explanation
> whatsoever. That message finally went away when I powered up the two other
> servers and started the cluster on all three simultaneously.
>
> Anyway, my cluster config for this testing. I use manual fencing for
> testing as the environment I test it in does not have exactly the same
> hardware as I have in the production environment.
>
> Conclusion:
> - why can't I mount GFS on another server, when it is mounted on one?
> - how do I get more debug information (ie: reason why a server can't join a
> fence domein. Or the reason why a server gets fenced).
>
> Thank you all for your time,
>
> Kees Hoekzema
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
--
Wayne Carty
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 04:26:04 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 09:56:04 +0530
Subject: [Linux-cluster] Knowing Cluster Version..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7AC0@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17937@in-ex004.groupinfra.com>
I have two RHEL 4.0 Update 2 servers, each installed with the following
packages:
ccs-1.0.6-0.x86_64.rpm
cman-1.0.8-0.x86_64.rpm
cman-kernel-smp-2.6.9-39.5.x86_64.rpm
cman-kernel-smp-2.6.9-44.7.x86_64.rpm
device-mapper-1.02.25-1.el4.x86_64.rpm
dlm-1.0.1-1.x86_64.rpm
dlm-kernel-smp-2.6.9-37.7.x86_64.rpm
dlm-kernel-smp-2.6.9-39.1.x86_64.rpm
dlm-kernel-smp-2.6.9-42.7.x86_64.rpm
dlm-kernel-smp-2.6.9-46.16.0.8.x86_64.rpm
lib64cluster1-1.03.00-2mdv2008.0.x86_64.rpm
lvm2-cluster-2.01.09-5.0.RHEL4.x86_64.rpm
lvm2-cluster-2.01.14-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.01-1.2.RHEL4.x86_64.rpm
lvm2-cluster-2.02.06-1.0.RHEL4.x86_64.rpm
lvm2-cluster-2.02.21-7.el4.x86_64.rpm
lvm2-cluster-2.02.27-2.el4_6.2.x86_64.rpm
magma-1.0.5-0.x86_64.rpm
magma-plugins-1.0.8-0.x86_64.rpm
rgmanager-1.9.50-0.x86_64.rpm
system-config-cluster-1.0.27-1.0.noarch.rpm
system-config-cluster-1[1].0.27-1.0.noarch.rpm
perl-Crypt-SSLeay-0.51-5.x86_64.rpm
What will my cluster version be, and how do I check that?
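(A sketch of the usual ways to check, independent of the rpm list above:)
rpm -q ccs cman rgmanager magma    # versions actually installed
cman_tool status                   # on a running cluster: protocol and config versions
cat /proc/cluster/status           # on RHEL 4 / cluster 1.x this kernel-side status file should exist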
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From nerix at free.fr Thu Jul 10 08:25:02 2008
From: nerix at free.fr (eric)
Date: Thu, 10 Jul 2008 10:25:02 +0200
Subject: [Linux-cluster] two node cluster update
Message-ID: <4875C75E.7000803@free.fr>
Hi list,
I'd like to know if there is a "best practice" for updating* a two-node
(active/passive) cluster with qdisk.
May I start with the passive node? Can it become dangerous if the
passive node runs different packages from the active node?
Here are my packages to update.
from | to
------------------------------------------------------------------------------------------------------
openais-0.80.3-7.el5 | openais 0.80.3-15.el5
cman-2.0.73-1.el5_1.1 | cman 2.0.84-2.el5
rgmanager-2.0.31-1.el5 | rgmanager 2.0.38-2.el5_2.1
ricci-0.10.0-6.el5 | ricci 0.12.0-7.el5.centos.3
modcluster-0.10.0-5.el5 | modcluster 0.12.0-7.el5.centos
Thanks.
Eric.
*updating from CentOS5.0 to CentOS5.2 (yum update).
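(Not a definitive answer, but a sketch of the usual one-node-at-a-time approach; service and node names are placeholders.)
# On the node currently running services: relocate them to the other node first
clusvcadm -r <service> -m <other_node>
# Then, on the now-idle node
yum update openais cman rgmanager ricci modcluster
reboot
# Check that it rejoins cleanly before repeating on the second node
clustat
Keeping the window where the two nodes run different package versions as short as possible is the usual advice; relocating services first avoids a failover happening mid-update.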
From ajeet.singh.raina at logica.com Thu Jul 10 08:52:03 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 14:22:03 +0530
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com>
I want to set up iSCSI as I am running short of shared storage.
In one of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, it says
that:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at bl04mpdsk ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00   /              ext3     defaults                         1 1
LABEL=/boot                /boot          ext3     defaults                         1 2
/dev/VolGroup00/LogVol02   /data          ext3     defaults                         1 2
none                       /dev/pts       devpts   gid=5,mode=620                   0 0
none                       /dev/shm       tmpfs    defaults                         0 0
none                       /proc          proc     defaults                         0 0
none                       /sys           sysfs    defaults                         0 0
#/dev/dvd                  /mnt/dvd       auto     defaults,exec,noauto,enaged      0 0
/dev/hda                   /media/cdrom            pamconsole,exec,noauto,managed   0 0
/dev/VolGroup00/LogVol01   swap           swap     defaults                         0 0
[/code]
Since I need to make an entry like the following
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
in /etc/ietd.conf, do I need to make a separate partition, or what should I
mention under the Lun 0 Path=??? entry?
Please help.
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Prakash.P at lsi.com Thu Jul 10 09:07:58 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Thu, 10 Jul 2008 17:07:58 +0800
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B1793D@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com>
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage
[/doc]
My Hard Disk Partition says:
[code]
[root at bl04mpdsk ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
#/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry?
If you wish, you can create a separate partition; otherwise, create a file and give the full path of that file [e.g. path=/home/test/target_file].
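(A minimal sketch of the file-backed variant; the path and size are only examples, and the init script name may differ depending on how IET was installed.)
# Create a sparse 10 GB backing file for the LUN
dd if=/dev/zero of=/home/test/target_file bs=1M count=0 seek=10240
# Then, in /etc/ietd.conf under the Target definition:
#   Lun 0 Path=/home/test/target_file,Type=fileio
service iscsi-target restart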
Pls Help
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 09:11:50 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 14:41:50 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com>
Do I need to mention Lun 0? Is it needed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at bl04mpdsk ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Prakash.P at lsi.com Thu Jul 10 09:18:06 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Thu, 10 Jul 2008 17:18:06 +0800
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com>
References: <2B52F34989FB054FAF95019F74B992D50539BC7C66@hkgmail01.lsi.com>
<0139539A634FD04A99C9B8880AB70CB209B1793E@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C6E@hkgmail01.lsi.com>
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
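(The LUN number is how the target numbers each exported volume under a Target, starting at 0; a sketch with an illustrative second LUN:)
Target iqn.2008-07.com.example:storage.disk1
        Lun 0 Path=/dev/sdb,Type=fileio
        Lun 1 Path=/home/test/another_file,Type=fileio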
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage
[/doc]
My Hard Disk Partition says:
[code]
[root at bl04mpdsk ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
#/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry?
If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccaulfie at redhat.com Thu Jul 10 09:26:54 2008
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Thu, 10 Jul 2008 10:26:54 +0100
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080709163222.GF5780@fieldses.org>
References: <20080627184117.GE19105@redhat.com> <20080706215105.GA28037@fieldses.org> <20080707154828.GB10404@redhat.com> <20080707184928.GE14291@fieldses.org> <20080708221533.GI15038@fieldses.org> <1215593064.3411.6.camel@localhost.localdomain> <48747BF6.2060001@redhat.com>
<20080709154004.GC5780@fieldses.org> <4874DE36.6030704@redhat.com>
<20080709163222.GF5780@fieldses.org>
Message-ID: <4875D5DE.7030601@redhat.com>
J. Bruce Fields wrote:
> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
>> J. Bruce Fields wrote:
>>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
>>>> Steven Whitehouse wrote:
>>>>> Hi,
>>>>>
>>>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>>>>>
>>>>>>>>
>>>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>>>>>> getting some dlm reply it expects?
>>>>>>>> dlm inter-node communication is not working here for some reason. There
>>>>>>>> must be something unusual with the way the network is configured on the
>>>>>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>>>>>> network config to the dlm.
>>>>>>>>
>>>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>>>>>> when a network interface has multiple IP addresses, and/or routing is
>>>>>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>>>>>> what to look for.
>>>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>>>>>> On the kvm host there are 4 virtual interfaces bridged together:
>>>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>>>>>> the second machine opened a tcp connection to port 21064 on the first
>>>>>> (which had already completed the mount), and sent it a single message
>>>>>> identified by wireshark as "DLM3" protocol, type recovery command:
>>>>>> status command. It got back an ACK then a RST.
>>>>>>
>>>>>> Then the same happened in the other direction, with the first machine
>>>>>> sending a similar message to port 21064 on the second, which then reset
>>>>>> the connection.
>>>>>>
>>>> That's a symptom of the "connect from non-cluster node" error in the
>>>> DLM.
>>> I think I am getting a message to that affect in my logs.
>>>
>>>> It's got a connection from an IP address that is not known to cman.
>>>> So it closes it as a spoofer
>>> OK. Is there an easy way to see the list of ip addresses known to cman?
>> yes,
>>
>> cman_tool nodes -a
>>
>> will show you all the nodes and their known IP addresses
>
> piglet2:~# cman_tool nodes -a
> Node Sts Inc Joined Name
> 1 M 376 2008-07-09 12:30:32 piglet1
> Addresses: 192.168.122.129
> 2 M 368 2008-07-09 12:30:31 piglet2
> Addresses: 192.168.122.130
> 3 M 380 2008-07-09 12:30:33 piglet3
> Addresses: 192.168.122.131
> 4 M 372 2008-07-09 12:30:31 piglet4
> Addresses: 192.168.122.132
>
> These addresses are correct (and are the same addresses that show up in the
> packet trace).
>
> I must be overlooking something very obvious....
Hmm, very odd.
Are those IP addresses consistent across all nodes in the cluster?
--
Chrissie
From ajeet.singh.raina at logica.com Thu Jul 10 09:26:40 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 14:56:40 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C6E@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com>
So I have the following entry in my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is the above entry correct?
My machine's hostname is pe.logical.com.
I am a little confused about storage.lun1; what is that?
For now I have not included any incoming or outgoing user, so it is open for all.
What about the Alias entry?
OK, after this entry is made, I am also confused about the client side.
The doc says you need to make an entry in the /etc/iscsi.conf file like this:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What does the above entry mean? Is it an IP address?
As for my setup, I am using a RHEL 4.0 machine with IP 10.14.236.134 as the
target machine, and the two nodes 10.14.236.106 and 10.14.236.108 are already
cluster nodes.
Thanks for helping me out. But I also need your help with what entries I need
to make in cluster.conf after these steps are completed.
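(My guess at a minimal client-side /etc/iscsi.conf for this setup, just
pointing discovery at the target machine's IP 10.14.236.134 and leaving CHAP
out since IncomingUser is empty on the target:

  # /etc/iscsi.conf on each cluster node - sketch only
  DiscoveryAddress=10.14.236.134
  LoginTimeout=15

The OutgoingUserName/OutgoingPassword lines from the doc would only be needed
if the target actually defined an IncomingUser.)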
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Do I need to mention Lun 0? Is it needed?
Yes, of course it's needed.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at bl04mpdsk ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00  /             ext3     defaults                     1 1
LABEL=/boot               /boot         ext3     defaults                     1 2
/dev/VolGroup00/LogVol02  /data         ext3     defaults                     1 2
none                      /dev/pts      devpts   gid=5,mode=620               0 0
none                      /dev/shm      tmpfs    defaults                     0 0
none                      /proc         proc     defaults                     0 0
none                      /sys          sysfs    defaults                     0 0
#/dev/dvd                 /mnt/dvd      auto     defaults,exec,noauto,enaged  0 0
/dev/hda                  /media/cdrom  pamconsole,exec,noauto,managed        0 0
/dev/VolGroup00/LogVol01  swap          swap     defaults                     0 0
[/code]
Since I need to make this entry:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Do I need to make a separate partition, or what should I mention in the Lun 0
Path=??? entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
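For example (the path and size here are only an illustration), on the target
you could pre-allocate a backing file and point the LUN at it:

  # create an ~4 GB backing file for the LUN
  dd if=/dev/zero of=/home/test/target_file bs=1M count=4096

  # /etc/ietd.conf
  Target iqn.2000-12.com.digicola:storage.lun1
          Lun 0 Path=/home/test/target_file,Type=fileio
          Alias iDISK0

and then restart the iscsi-target service so ietd picks the new LUN up.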
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 10:00:02 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 15:30:02 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com>
I am facing this issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm  iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm  iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsitarget-kernel     ########################################### [ 50%]
   2:iscsitarget            ########################################### [100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Is it because of the folder?
I have my doubts.)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Prakash.P at lsi.com Thu Jul 10 10:09:06 2008
From: Prakash.P at lsi.com (P, Prakash)
Date: Thu, 10 Jul 2008 18:09:06 +0800
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17940@in-ex004.groupinfra.com>
<0139539A634FD04A99C9B8880AB70CB209B17941@in-ex004.groupinfra.com>
Message-ID: <2B52F34989FB054FAF95019F74B992D50539BC7C8B@hkgmail01.lsi.com>
This is related to IET. Go through their mailing list to find the solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
Preparing... ########################################### [100%]
1:iscsitarget-kernel ########################################### [ 50%]
2:iscsitarget ########################################### [100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages. Configure the system with two drives sda and sdb or create two logical volumes(lvm). The first disk is for the OS and the second for the iSCSI storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
#/dev/dvd /mnt/dvd auto defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0 path=??? Entry?
If you wish you can create a separate partition. Else create a file & give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 10:42:59 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 16:12:59 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <2B52F34989FB054FAF95019F74B992D50539BC7C8B@hkgmail01.lsi.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17942@in-ex004.groupinfra.com>
Great!!!
I ran depmod and now it works fine.
Thanks for the link anyway.
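(For anyone else hitting the "FATAL: Module iscsi_trgt not found" error after
installing the iscsitarget-kernel RPM, what seems to have fixed it here was
roughly:

  depmod -a                     # rebuild module dependencies for the running kernel
  modprobe iscsi_trgt           # the IET kernel module the init script complained about
  service iscsi-target restart  # ietd should now start cleanly

assuming the iscsitarget-kernel package really matches the running kernel.)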
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 10:57:39 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 16:27:39 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17942@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17943@in-ex004.groupinfra.com>
I followed what the doc said, and it went like this:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It displays this:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is everything fine up to this point?
What next?
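(To double-check the login on each node, something like the following should
show the new IET disk; which /dev/sd? letter it gets depends on the disks the
node already has:

  cat /proc/scsi/scsi   # should list Vendor: IET, Model: VIRTUAL-DISK
  iscsi-ls              # session details for the target
  fdisk -l              # the exported LUN appears as an extra /dev/sd? device
)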
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Thu Jul 10 11:03:22 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 16:33:22 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17943@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com>
[root at BL02DL385 ~]# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
*******************************************************************************
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
*******************************************************************************
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. Finally, the iSCSI setup is done.
What is the next step?
Please help.
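(A rough sketch of the usual next steps, assuming both cluster nodes now see
the same IET virtual disk as, say, /dev/sdb - the device name, volume names and
cluster name below are placeholders, not taken from this thread:

  pvcreate /dev/sdb
  vgcreate vg_shared /dev/sdb
  lvcreate -l 100%FREE -n lv_gfs vg_shared
  gfs_mkfs -p lock_dlm -t <clustername>:gfs1 -j 2 /dev/vg_shared/lv_gfs

after which the filesystem can be mounted from both nodes and referenced from
the cluster/service configuration; the journal count (-j) has to be at least
the number of nodes that will mount it.)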
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will do run the same command sequence in the other Cluster Node.
Is it fine upto this point?
What Next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What does the above entry mean? Is it an IP address?
For my setup, the RHEL 4.0 machine with IP 10.14.236.134 is the target,
and the two nodes 10.14.236.106 and 10.14.236.108 are already members of
the cluster.
Thanks for helping me out. Could you also tell me what entry I need to
make in cluster.conf once all of this is done?
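(For reference, the name after "Target" is just an iSCSI qualified name of
the form iqn.<year-month>.<reversed domain>:<label>, so storage.lun1 is only
a label for this LUN, and DiscoveryAddress on the client side is the target
machine's IP address. A minimal sketch of the two files, reusing only the
names and addresses already quoted in this thread and leaving the CHAP
users out, might look like this:
[code]
# /etc/ietd.conf on the target (10.14.236.134)
Target iqn.2008-10.com.logical.pe:storage.lun1
        Lun 0 Path=/home/vjs/sharess,Type=fileio
        Alias iDISK0

# /etc/iscsi.conf on each initiator node (10.14.236.106 and 10.14.236.108)
DiscoveryAddress=10.14.236.134
LoginTimeout=15
[/code]
)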
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Do I need to mention Lun 0? Is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to set up iSCSI because I am running short of shared storage.
One of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says
the following:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00  /             ext3    defaults                        1 1
LABEL=/boot               /boot         ext3    defaults                        1 2
/dev/VolGroup00/LogVol02  /data         ext3    defaults                        1 2
none                      /dev/pts      devpts  gid=5,mode=620                  0 0
none                      /dev/shm      tmpfs   defaults                        0 0
none                      /proc         proc    defaults                        0 0
none                      /sys          sysfs   defaults                        0 0
#/dev/dvd                 /mnt/dvd      auto    defaults,exec,noauto,enaged     0 0
/dev/hda                  /media/cdrom          pamconsole,exec,noauto,managed  0 0
/dev/VolGroup00/LogVol01  swap          swap    defaults                        0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Do I need to make a separate partition, or what should I put in the
Lun 0 Path= entry?
If you wish, you can create a separate partition. Otherwise, create a
file and give the full path of that file (e.g. Path=/home/test/target_file).
Please help.
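(A minimal sketch of the file-backed approach described above; the path is
the example path given in the reply and the 1 GB size is arbitrary. A plain
directory cannot serve as a fileio backing store - it has to be a regular
file or a block device:
[code]
# create a fixed-size backing file on the target machine
dd if=/dev/zero of=/home/test/target_file bs=1M count=1024

# then point the LUN at it in /etc/ietd.conf
Target iqn.2000-12.com.digicola:storage.lun1
        Lun 0 Path=/home/test/target_file,Type=fileio
[/code]
)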
From ajeet.singh.raina at logica.com Thu Jul 10 11:23:18 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 16:53:18 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17946@in-ex004.groupinfra.com>
A few issues still remain:
After setting up the iSCSI target and the iSCSI initiator, I am not
seeing any shared disk on either of the cluster nodes.
I think I missed a step.
At the end, the doc says:
"Voila! You should now have a new SCSI disk available for use. Now you
can use fdisk to partition the disk (fdisk /dev/sdb) and use mkfs to
format the partition (which is out of the scope of this howto)."
Do I need to run fdisk? Please help.
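(If the disk is visible on the initiators, the usual next step is to
partition it once, from one node only, and put a cluster-aware filesystem
on it. A sketch, assuming the iSCSI disk shows up as /dev/sdb and the
cluster defined in cluster.conf is named mycluster - both are placeholders:
[code]
fdisk /dev/sdb                                          # create a single partition, /dev/sdb1
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/sdb1   # RHEL4 GFS, two journals for two nodes
mount -t gfs /dev/sdb1 /mnt/shared                      # repeat the mount on every node
[/code]
)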
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked; the iSCSI setup is finally done.
What is the next step?
Please help.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed the doc, and this is what I got:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It displays:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is everything fine up to this point?
What next?
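(As an aside, /proc/scsi/scsi is normally read with cat rather than opened
in vi, and fdisk -l shows which /dev/sdX node the new IET VIRTUAL-DISK was
given - a quick sketch:
[code]
cat /proc/scsi/scsi   # should list the IET VIRTUAL-DISK entry shown above
fdisk -l              # the iSCSI LUN appears as an extra /dev/sdX disk
[/code]
)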
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great!
I ran depmod and it works well now.
Thanks for the link anyway.
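(For completeness, a sketch of the sequence that matches this fix;
iscsi_trgt is the module name taken from the error message above:
[code]
depmod -a                      # rebuild the module index so the new iscsitarget-kernel RPM is found
modprobe iscsi_trgt            # load the IET target module
service iscsi-target restart   # ietd can now open its netlink socket
lsmod | grep iscsi_trgt        # confirm the module is loaded
[/code]
)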
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am facing this issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Could the failure be
because the backing path is a folder?)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
From ajeet.singh.raina at logica.com Thu Jul 10 11:52:40 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 17:22:40 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17946@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17949@in-ex004.groupinfra.com>
I tried running this on the client machine:
# tail -f /var/log/messages
It says:
Jul 9 12:42:40 BL02DL385 kernel: sda : very big device. try to use READ
CAPACITY(16).
Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr
sectors (0 MB)
Jul 9 12:42:40 BL02DL385 kernel: SCSI device sda: drive cache: write
back
Jul 9 12:42:40 BL02DL385 kernel: Attached scsi disk sda at scsi1,
channel 0, id 0, lun 0
Jul 9 12:42:40 BL02DL385 scsi.agent[28387]: disk at
/devices/platform/host1/target1:0:0/1:0:0:0
Jul 9 12:44:31 BL02DL385 kernel: sda : very big device. try to use READ
CAPACITY(16).
Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: 0 512-byte hdwr
sectors (0 MB)
Jul 9 12:44:31 BL02DL385 kernel: SCSI device sda: drive cache: write
back
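(One detail worth noting in this log: the LUN is reported as 0 512-byte
sectors (0 MB), which is what you would expect if the Path= on the target
points at an empty file or a directory - the earlier ietd.conf used
Path=/root/cluster_share, which is a folder - rather than a sized backing
file or block device. A sketch of a fix on the target; the file name and
the 1 GB size are arbitrary:
[code]
# on the target machine
dd if=/dev/zero of=/root/cluster_share/disk0 bs=1M count=1024
# in /etc/ietd.conf:  Lun 0 Path=/root/cluster_share/disk0,Type=fileio
service iscsi-target restart
[/code]
)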
From ajeet.singh.raina at logica.com Thu Jul 10 11:56:05 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Thu, 10 Jul 2008 17:26:05 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17949@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794A@in-ex004.groupinfra.com>
And when I run this on the client, it says:
find /sys/devices/platform/host* -name "block*"
/sys/devices/platform/host1/target1:0:0/1:0:0:0/block
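(If it helps, the block entry under that sysfs path identifies the block
device node the LUN was assigned; on kernels of this vintage it is a
symlink or directory named after the /dev/sdX device, so a sketch like the
following shows which device to partition:
[code]
ls -l /sys/devices/platform/host1/target1:0:0/1:0:0:0/block
# the symlink/entry names the block device, e.g. "sda" means /dev/sda
[/code]
)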
From swhiteho at redhat.com Thu Jul 10 13:27:14 2008
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Thu, 10 Jul 2008 14:27:14 +0100
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080709163222.GF5780@fieldses.org>
References:
<20080627184117.GE19105@redhat.com>
<20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org>
Message-ID: <1215696434.4011.161.camel@quoit>
Hi,
On Wed, 2008-07-09 at 12:32 -0400, J. Bruce Fields wrote:
> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
> > J. Bruce Fields wrote:
> >> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
> >>> Steven Whitehouse wrote:
> >>>> Hi,
> >>>>
> >>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
> >>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> >>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> >>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> >>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
> >>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
> >>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
> >>>>>>>
> >>>>>>>
> >>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> >>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> >>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
> >>>>>>>> getting some dlm reply it expects?
> >>>>>>> dlm inter-node communication is not working here for some reason. There
> >>>>>>> must be something unusual with the way the network is configured on the
> >>>>>>> nodes, and/or a problem with the way the cluster code is applying the
> >>>>>>> network config to the dlm.
> >>>>>>>
> >>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
> >>>>>>> when a network interface has multiple IP addresses, and/or routing is
> >>>>>>> configured strangely. Others cc'ed could offer better details on exactly
> >>>>>>> what to look for.
> >>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
> >>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
> >>>>>> On the kvm host there are 4 virtual interfaces bridged together:
> >>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
> >>>>> the second machine opened a tcp connection to port 21064 on the first
> >>>>> (which had already completed the mount), and sent it a single message
> >>>>> identified by wireshark as "DLM3" protocol, type recovery command:
> >>>>> status command. It got back an ACK then a RST.
> >>>>>
> >>>>> Then the same happened in the other direction, with the first machine
> >>>>> sending a similar message to port 21064 on the second, which then reset
> >>>>> the connection.
> >>>>>
> >>> That's a symptom of the "connect from non-cluster node" error in the
> >>> DLM.
> >>
> >> I think I am getting a message to that effect in my logs.
> >>
> >>> It's got a connection from an IP address that is not known to cman.
> >>> So it closes it as a spoofer
> >>
> >> OK. Is there an easy way to see the list of ip addresses known to cman?
> >
> > yes,
> >
> > cman_tool nodes -a
> >
> > will show you all the nodes and their known IP addresses
>
> piglet2:~# cman_tool nodes -a
> Node Sts Inc Joined Name
> 1 M 376 2008-07-09 12:30:32 piglet1
> Addresses: 192.168.122.129
> 2 M 368 2008-07-09 12:30:31 piglet2
> Addresses: 192.168.122.130
> 3 M 380 2008-07-09 12:30:33 piglet3
> Addresses: 192.168.122.131
> 4 M 372 2008-07-09 12:30:31 piglet4
> Addresses: 192.168.122.132
>
> These addresses are correct (and are the same addresses that show up in the
> packet trace).
>
> I must be overlooking something very obvious....
>
> --b.
>
There is something v. odd in the packet trace you sent:
16:31:25.513487 00:16:3e:2a:e6:4b (oui Unknown) > 00:16:3e:16:4d:61 (oui Unknown), ethertype IPv4 (0x0800), length 74: 192.168.122.130.41170 > 192.168.122.129.21064: S 1424458172:1424458172(0) win 5840
Here we have a packet from .130 (00:16:3e:2a:e6:4b) to .129
(00:16:3e:16:4d:61), but next we see:
16:31:25.513880 00:ff:1d:e9:b9:a3 (oui Unknown) > 00:16:3e:2a:e6:4b (oui Unknown), ethertype IPv4 (0x0800), length 74: 192.168.122.129.21064 > 192.168.122.130.41170: S 1340956343:1340956343(0) ack 1424458173 win 5792
a packet that's supposedly from .129, except that its MAC address is now
00:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured
on two different nodes, or else there is something odd going on
with the bridging. If that still doesn't help you solve the problem, can you
do a:
/sbin/ip addr list
/sbin/ip route list
/sbin/ip neigh list
on each node and the "host" after a failed attempt, so that we can try
to match up the MAC addresses with the interfaces in the trace?
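For what it's worth, a quick sketch of how to line those up (the bridge name
br0 is only an assumption; substitute whatever bridge the KVM host actually
uses):

  /sbin/ip -o link show     # every interface with its MAC address
  /sbin/ip neigh show       # ARP cache: which MAC each IP currently resolves to
  # on the KVM host only:
  brctl show                # which vnetX interfaces are attached to the bridge
  brctl showmacs br0        # learned MACs and the bridge port each one sits on

Comparing the port that 00:ff:1d:e9:b9:a3 appears on with the port belonging
to the guest that should own .129 would show whether two different machines
are answering for that address.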
I don't think that we are too far away from a solution now,
Steve.
From pariviere at ippon.fr Thu Jul 10 14:02:39 2008
From: pariviere at ippon.fr (Pierre-Alain RIVIERE)
Date: Thu, 10 Jul 2008 16:02:39 +0200
Subject: [Linux-cluster] Recovery disaster
Message-ID: <1215698559.18002.74.camel@t61>
Hello everyone,
We've been using Xen for about a year in my organization and I want to take
advantage of the summer to improve our infrastructure. First step: plan a
full disaster recovery procedure. It's not only related to Xen (only a
little, actually), so I've allowed myself to post on both the Xen and
Linux-cluster lists.
My infrastructure is built as follows:
- One software SAN built with Openfiler (http://openfiler.com) : big
disks, RAID 5E, redundancy on power supply, network, cpu and RAM.
- N Xen Dom0 (actually 3)
- The same iSCSI volume is mounted on each Dom0 and we're using CLVM on
it. A PV equals a DomU disk.
It works pretty well, and now I would like to be able to rebuild my SAN as
quickly as possible in case of a problem (a big hardware failure on the SAN).
Here is how all these pieces work together:
|---------Openfiler------| |----------Dom0---------|
PV -> VG -> LV -> iSCSI -> network -> PV -> VG -> LV->Xen VDB
PV : physical volume
VG : volume group
LV : logical volume
--------------------------------------------------------------
- We use the LVM layer (Dom0 side) on top of another LVM layer (SAN
side) and performance has been good so far. Do you know of any caveats
with this usage? Is there any reason for me to switch to a network-aware
filesystem?
- Can I dd a snapshot of the iSCSI volume on the Openfiler box, send it
to a tape drive, and expect a dd back onto an identical LV to work? (See
the rough sketch after this list.)
- Same question if 2 or more LVs on the Openfiler box are aggregated
together with CLVM (and through iSCSI) on the Dom0 side.
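As a rough sketch of the procedure being asked about (VG/LV names, sizes and
the tape device are all hypothetical, and the copy is only crash-consistent
unless the DomU is shut down or quiesced first):

  # on the Openfiler box: snapshot the LV backing the iSCSI volume
  lvcreate --snapshot --size 10G --name domu1-snap /dev/san_vg/domu1
  dd if=/dev/san_vg/domu1-snap bs=1M | gzip -c > /dev/st0
  lvremove -f /dev/san_vg/domu1-snap

  # restore: recreate an LV at least as large as the original, then copy back
  lvcreate --size 50G --name domu1 san_vg
  gzip -dc < /dev/st0 | dd of=/dev/san_vg/domu1 bs=1M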
Thanks
Regards
From ssingh at amnh.org Thu Jul 10 15:47:48 2008
From: ssingh at amnh.org (Sajesh Singh)
Date: Thu, 10 Jul 2008 11:47:48 -0400
Subject: [Linux-cluster] Change quorum disk
Message-ID: <48762F24.2000102@amnh.org>
Is it possible to change the quorum disk while the cluster is active? I
would like to change the device that qdiskd is using without having to
cycle the cluster. Is it possible to modify the cluster.conf on each
node with the new quorum disk and restart qdiskd so that the new device
is used?
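For reference, the piece of cluster.conf in question is the <quorumd> stanza;
a minimal sketch (the device, label and timings below are hypothetical):

  mkqdisk -c /dev/mapper/new-qdisk -l new_qdisk    # initialise the new device

  <quorumd interval="1" tko="10" votes="1" label="new_qdisk"/>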
Regards and TIA,
Sajesh Singh
From andrew at ntsg.umt.edu Thu Jul 10 16:54:41 2008
From: andrew at ntsg.umt.edu (Andrew A. Neuschwander)
Date: Thu, 10 Jul 2008 10:54:41 -0600 (MDT)
Subject: [Linux-cluster] Updated to 5.2, new gfs/locking messages
Message-ID: <51672.10.8.105.69.1215708881.squirrel@secure.ntsg.umt.edu>
I finally updated all my gfs cluster nodes to 5.2. When I updated the one
node that serves NFS, I started getting these in /var/log/messages:
gfs_controld[5079]: plock result write err 0 errno 2
kernel: lockd: grant for unknown block
kernel: gfs2 lock granted after lock request failed; dangling lock!
gfs_controld[5079]: plock result write err -1 errno 2
gfs_controld[5079]: plock result write err 0 errno 2
The "plock result write err" messages occur frequently. This is a centos
5.2 node serving nfs from a gfs filesystem. The nfs client that seems to
generate these errors is a fedora 9 nfs3 client, but that's just a guess.
I can't find much about these messages via google. How serious are these
messages?
Thanks,
-A
--
Andrew A. Neuschwander, RHCE
Linux Systems/Software Engineer
College of Forestry and Conservation
The University of Montana
http://www.ntsg.umt.edu
andrew at ntsg.umt.edu - 406.243.6310
From bfilipek at crscold.com Thu Jul 10 17:41:59 2008
From: bfilipek at crscold.com (Brad Filipek)
Date: Thu, 10 Jul 2008 12:41:59 -0500
Subject: [Linux-cluster] Basic 2 node NFS cluster setup help
References: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com>
Message-ID: <9C01E18EF3BC2448A3B1A4812EB87D024779@SRVEDI.upark.crscold.com>
Anybody running a 2 node NFS setup like this?
Brad
-----Original Message-----
From: linux-cluster-bounces at redhat.com on behalf of Brad Filipek
Sent: Wed 7/9/2008 8:51 AM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] Basic 2 node NFS cluster setup help
I am a little unsure how to properly set up an NFS export on my 2-node cluster. I have 1 service in cluster manager called "cluster" and 4 resources:
1) Virtual IP of 172.25.7.10 (which binds to eth0)
2) Virtual IP of 172.25.8.10 (which binds to eth1)
3) ext3 file system mount at /SAN/LogVol2 called "data"
4) ext3 file system mount at /SAN/LogVol3 called "shared"
When I start the cluster services using just these 4 resources assigned to my one service called "cluster", everything starts up and works fine.
What I need to do is assign 3 NFS exports:
/SAN/LogVol3/files webserver(ro,sync)
/SAN/LogVol3/webup webserver(rw,sync)
/SAN/LogVol2/webdown webserver(ro,sync)
Do I need to create 3 new "NFS Export" resources for these? When I select the "NFS Export" option within cluster suite, I only have one field to fill in - Name. It does not let me select the path that I want to export and which options to allow such as the host, ro or rw, etc. I am just trying to make the above exports available on my cluster's virtual IP of 172.25.7.10 instead of setting it up on each of the two nodes and manually starting the NFS service on whichever node is active in the cluster. Do I still need to create an /etc/exports file with all 3 of these entries on each node? Or is there a config file somewhere else? I read the NFS cookbook but it explains how to setup NFS using multiple services (I only have one service) with active/active GFS (I am using EXT3 in active/passive).
Thanks in advance for any help.
Brad
Confidentiality Notice: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient or the employee or agent responsible for delivering this message to the intended recipient, you are hereby notified that any dissemination, distribution or copying of this communication is strictly prohibited.
If you have received this communication in error, please notify us immediately by email reply or by telephone and immediately delete this message and any attachments.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lhh at redhat.com Thu Jul 10 20:42:50 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Thu, 10 Jul 2008 16:42:50 -0400
Subject: [Linux-cluster] Basic 2 node NFS cluster setup help
In-Reply-To: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com>
References: <9C01E18EF3BC2448A3B1A4812EB87D024778@SRVEDI.upark.crscold.com>
Message-ID: <1215722570.22185.29.camel@localhost.localdomain>
On Wed, 2008-07-09 at 08:51 -0500, Brad Filipek wrote:
> I am a little unsure on how to properly setup an NFS export on my 2
> node cluster. I have 1 service in cluster manager called "cluster" and
> 4 resources:
>
> 1) Virtual IP of 172.25.7.10 (which binds to eth0)
> 2) Virtual IP of 172.25.8.10 (which binds to eth1)
> 3) ext3 file system mount at /SAN/LogVol2 called "data"
> 4) ext3 file system mount at /SAN/LogVol3 called "shared"
>
> When I start the cluster services using just these 4 resources assigned
> to my one service called "cluster", everything starts up and works
> fine.
>
> What I need to do is assign 3 NFS exports:
> /SAN/LogVol3/files webserver(ro,sync)
> /SAN/LogVol3/webup webserver(rw,sync)
> /SAN/LogVol2/webdown webserver(ro,sync)
>
> Do I need to create 3 new "NFS Export" resources for these? When I
> select the "NFS Export" option within cluster suite, I only have one
> field to fill in - Name. It does not let me select the path that I
> want to export and which options to allow such as the host, ro or rw,
> etc. I am just trying to make the above exports available on my
> cluster's virtual IP of 172.25.7.10 instead of setting it up on each
> of the two nodes and manually starting the NFS service on whichever
> node is active in the cluster. Do I still need to create
> an /etc/exports file with all 3 of these entries on each node? Or is
> there a config file somewhere else? I read the NFS cookbook but it
> explains how to setup NFS using multiple services (I only have one
> service) with active/active GFS (I am using EXT3 in active/passive).
Typically, you add an NFSexport (which is mostly a placeholder). Below
that, you attach nfsclients - which are actual hosts.
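As a rough sketch of that nesting in cluster.conf, using the resources from
the original mail (the device path is made up, and by default the export path
is the mount point of the parent fs resource):

  <service name="cluster">
      <ip address="172.25.7.10" monitor_link="1"/>
      <fs name="shared" device="/dev/mapper/logvol3" mountpoint="/SAN/LogVol3"
          fstype="ext3">
          <nfsexport name="shared-exports">
              <nfsclient name="web-ro" target="webserver" options="ro,sync"/>
          </nfsexport>
      </fs>
  </service>

With this in place rgmanager handles exportfs itself, so these entries do not
also need to live in /etc/exports on each node.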
-- Lon
From ajeet.singh.raina at logica.com Fri Jul 11 06:14:41 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 11:44:41 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com>
Anyway, I have been successful in setting up the iSCSI initiator and target.
What I did was create a raw (unformatted) partition on the target machine
and restart both machines.
I put:
Lun 0 path=/dev/sda6
and that did the job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
The path where the unformatted partition was made is /dev/sda6.
As of now, I have only an unformatted partition. Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. Finally, the iSCSI setup is done.
What is the next step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed what the doc said and it went this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It displays this:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is it fine up to this point?
What next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Could the failure be
because of that folder? I am not sure.)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following entry in my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is the above entry correct?
My machine hostname is pe.logical.com.
I am a little confused about storage.lun1; what is that?
I have not included any incoming or outgoing user, so it's open for all.
What about the Alias entry?
OK, after making this entry, I am also confused about the client side.
The doc says you need to make an entry in the /etc/iscsi.conf file like this:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What does the above entry mean? Is it an IP address?
As for my setup, I am using a RHEL 4.0 machine with IP 10.14.236.134
as the target machine, and the two nodes 10.14.236.106 and 10.14.236.108
are already cluster nodes.
Thanks for helping me out. But please also help me with what entry I need
to make in cluster.conf after these things are completed.
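For reference, the target name follows the standard iSCSI qualified name
format, iqn.<year-month>.<reversed domain>:<free-form identifier>, so
"storage.lun1" is just an arbitrary label for this target. A name consistent
with the host pe.logical.com might look like this (the date part is an
assumption):

  Target iqn.2008-07.com.logical.pe:storage.lun1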
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Do I need to mention Lun 0? Is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to set up iSCSI as I am running short of shared storage.
One of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says
this:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00  /             ext3    defaults                        1 1
LABEL=/boot               /boot         ext3    defaults                        1 2
/dev/VolGroup00/LogVol02  /data         ext3    defaults                        1 2
none                      /dev/pts      devpts  gid=5,mode=620                  0 0
none                      /dev/shm      tmpfs   defaults                        0 0
none                      /proc         proc    defaults                        0 0
none                      /sys          sysfs   defaults                        0 0
#/dev/dvd                 /mnt/dvd      auto    defaults,exec,noauto,managed    0 0
/dev/hda                  /media/cdrom          pamconsole,exec,noauto,managed  0 0
/dev/VolGroup00/LogVol01  swap          swap    defaults                        0 0
[/code]
Since I need to make an entry like this:
# iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Do I need to make a separate partition, or what should I put under the
Lun 0 path= entry?
If you wish, you can create a separate partition. Otherwise, create a file
and give the full path of the file [e.g. path=/home/test/target_file].
Pls Help
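A minimal sketch of the file-backed approach described above (file name,
size and IQN are only examples; note that Path= has to point at a regular
file or a block device, not at a directory):

  # on the target box: create a 1 GB backing file
  dd if=/dev/zero of=/home/test/target_file bs=1M count=1024

  # /etc/ietd.conf
  Target iqn.2008-07.com.example.pe:storage.lun1
          Lun 0 Path=/home/test/target_file,Type=fileio
          Alias iDISK0

  service iscsi-target restart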
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Sayed.Mujtaba at in.unisys.com Fri Jul 11 06:55:34 2008
From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed)
Date: Fri, 11 Jul 2008 12:25:34 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17944@in-ex004.groupinfra.com>
<0139539A634FD04A99C9B8880AB70CB209B1794D@in-ex004.groupinfra.com>
Message-ID:
Re: When I click on Resource >> File System in the Cluster Tool, it asks
for Mount point, Device, Option, Name, filesystem ID, and filesystem
type. What entries do I need to make?
Create one directory as the mount point and select whichever file system
you want from the list; you can choose the default file system ID there.
The GUI will do the rest.
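The GUI entry ends up as an <fs> resource in cluster.conf; a rough sketch of
what it generates is below (name, mount point and fsid are hypothetical).
Note that the fs agent only mounts an existing filesystem, so the partition
needs a mkfs first:

  mkfs.ext3 /dev/sda6   # run once, from one node only

  <fs name="shared_fs" device="/dev/sda6" mountpoint="/mnt/shared"
      fstype="ext3" fsid="12345" force_unmount="1"/>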
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Fri Jul 11 07:02:34 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 12:32:34 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com>
Yes, I have now created the /newshare directory on both iSCSI initiator
machines (the cluster nodes).
I made the following entry through system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
I saved the file and sent it to the other cluster node.
Now what next?
How will I know whether the shared storage is seen by both cluster
nodes?
Earlier I had a script called duoscript on both cluster nodes. Here is what I
had tested:
I ran the script on both cluster nodes. I stopped a few processes on
one of the nodes, and suddenly the other took over responsibility.
Now, where on the shared storage (target) should I put the script?
Pls Help
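A quick way to sanity-check this by hand is sketched below. Two caveats,
both assumptions worth verifying: on the initiator nodes the exported LUN
usually shows up as a new disk of its own (for example /dev/sdb, depending
on what other disks exist) rather than as /dev/sda6, and an ext3 filesystem
on it must only ever be mounted on one node at a time:

  # on each node: confirm the LUN is visible and find its device name
  cat /proc/scsi/scsi
  fdisk -l

  # format it once, from one node only (the device name is an assumption)
  mkfs.ext3 /dev/sdb

  # test-mount by hand, then unmount before letting the cluster manage it
  mount /dev/sdb /newshare
  df -h /newshare
  umount /newshare

  # check node and service status on both nodes
  clustat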
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re:When I click on Resource >> File System on Cluster Tool...It asked
for Mount point, Device, Option,Name,filesystem id, filesystem
type..What Entry I need to make ?
Create one directory as mount point , Select any file system which you
want to create in list ,you can choose default file system ID there ..
GUI will do the rest ..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I am successful in setting Up iSCSI iniatiator and Target.
What I did is Created a raw partition(unformatted ) on target machine
and restarted both the machine.
I put :
Lun 0 path=/dev/sda6
And That Did job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked.Finally ISCSI Setup Done.
What is the next Step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will do run the same command sequence in the other Cluster Node.
Is it fine upto this point?
What Next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to set up iSCSI as I am running short of shared storage.
One of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My hard disk partitioning looks like this:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00  /             ext3    defaults                        1 1
LABEL=/boot               /boot         ext3    defaults                        1 2
/dev/VolGroup00/LogVol02  /data         ext3    defaults                        1 2
none                      /dev/pts      devpts  gid=5,mode=620                  0 0
none                      /dev/shm      tmpfs   defaults                        0 0
none                      /proc         proc    defaults                        0 0
none                      /sys          sysfs   defaults                        0 0
#/dev/dvd                 /mnt/dvd      auto    defaults,exec,noauto,enaged     0 0
/dev/hda                  /media/cdrom          pamconsole,exec,noauto,managed  0 0
/dev/VolGroup00/LogVol01  swap          swap    defaults                        0 0
[/code]
Since I need to make an entry like this in /etc/ietd.conf:
# iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
should I make a separate partition, or what should I mention in the
Lun 0 Path= entry?
If you wish you can create a separate partition. Otherwise create a file
and give the full path of the file [e.g. Path=/home/test/target_file].
Please help.
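A hedged sketch of the file-backed option Prakash describes (the path and
size below are only examples):
[code]
# on the target machine: create a 1 GB file to back the LUN
dd if=/dev/zero of=/home/test/target_file bs=1M count=1024

# then reference it from /etc/ietd.conf and restart the target service:
#   Lun 0 Path=/home/test/target_file,Type=fileio
service iscsi-target restart
[/code]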
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Sayed.Mujtaba at in.unisys.com Fri Jul 11 09:51:07 2008
From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed)
Date: Fri, 11 Jul 2008 15:21:07 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com>
References:
<0139539A634FD04A99C9B8880AB70CB209B1794E@in-ex004.groupinfra.com>
Message-ID:
To discover this volume from both nodes, hopefully you are aware of these
iscsi commands.
Just giving examples:
1) First discover whether these volumes are visible:
# iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is the IP address of the iscsi target)
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing the prov,
prov-goldilocks1, p3vmware and p2vmware volumes [whichever is created].
2) Log in to iscsi:
# iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 --login
3) Do cat /proc/partitions.
It should show you the /dev/sd* devices.
4) Mount that /dev/sd* on any of the cluster nodes [it should allow you
to mount from both nodes].
Just read some iscsi manuals and do this [you can do it without the GUI;
the Add New Resource step is basically a clustering resource which
automatically mounts your shared device when the cluster manager is
started].
So better to configure it using the iscsi commands and see whether you
can mount it from both nodes [then you can add a resource for it].
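A compressed sketch of that flow on one initiator node, reusing Sayed's
example portal and target name (the device name and mount point at the end
are placeholders, and this assumes the open-iscsi style iscsiadm he is
quoting):
[code]
# 1) discover which targets the portal exports
iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
# 2) log in to one of the reported targets
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 --login
# 3) the new disk should now be listed
cat /proc/partitions
# 4) mount it (assuming it already carries a filesystem)
mount /dev/sdb /mnt/shared
[/code]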
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Yes, I have now created the /newshare directory on both iSCSI initiator
machines (the cluster nodes).
I made the following entry through system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
I saved the file and sent it to the other cluster nodes.
Now what next?
How will I know whether the shared storage is seen from both the cluster
nodes?
Earlier I had a script called duoscript on both the cluster nodes. What I
had tested: I ran the script on both cluster nodes, stopped a few
processes on one node, and the other immediately took over.
Now where should I put the script on the shared storage (target)?
Please help.
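One hedged way to confirm that both nodes really see the same device before
involving the cluster tooling (device names are placeholders; the iSCSI LUN
will usually not show up as /dev/sda6 on the initiators):
[code]
# on node 1
cat /proc/scsi/scsi            # the IET VIRTUAL-DISK should be listed
mount /dev/sdb /newshare
touch /newshare/from-node1     # leave a marker file
umount /newshare
# on node 2
mount /dev/sdb /newshare
ls -l /newshare/from-node1     # the marker written by node 1 should be visible
umount /newshare
[/code]
With plain ext3 the filesystem should only ever be mounted on one node at a
time; that is exactly what the cluster filesystem resource manages.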
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re: When I click on Resource >> File System in the cluster tool, it asks
for mount point, device, option, name, filesystem id and filesystem type.
What entries do I need to make?
Create one directory as the mount point, select the file system you want
from the list, and you can choose the default file system ID there.
The GUI will do the rest.
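For reference, a sketch of what such a filesystem resource can end up
looking like inside cluster.conf once the GUI has written it out (attribute
values taken from Ajeet's entries above; the exact markup the tool
generates may differ slightly):
[code]
<resources>
        <fs name="Sharedstorage" device="/dev/sda6" mountpoint="/newshare"
            fstype="ext3" force_unmount="1"/>
</resources>
[/code]
The resource is then referenced from the service that should fail over
together with it.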
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I have succeeded in setting up the iSCSI initiator and target.
What I did was create a raw (unformatted) partition on the target machine
and restart both machines.
I put:
Lun 0 path=/dev/sda6
and that did the job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "VIRTUAL-DISK" entry confirms that.
Now I am making the entry in system-config-cluster and want to know what
exact entry I need to make here:
When I click on Resource >> File System in the cluster tool, it asks for
mount point, device, option, name, filesystem id and filesystem type.
What entries do I need to make?
My machine address is 10.14.236.134.
The path where the unformatted partition was made is /dev/sda6.
For now I only have an unformatted partition. Do I need to format it?
Please help.
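Yes, the exported LUN still needs a filesystem before a cluster fs resource
can mount it. A hedged sketch, run once from a single initiator node, using
whatever device name that node actually reports for the iSCSI disk (it will
normally not be /dev/sda6 on the initiator side):
[code]
mkfs.ext3 /dev/sdb          # format the iSCSI disk once, from one node only
e2label /dev/sdb DATA       # optional label, as used later in this thread
mount /dev/sdb /newshare    # quick manual check, then release it again
umount /newshare
[/code]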
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. The iSCSI setup is finally done.
What is the next step?
Please help.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed what the doc said and got this:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It displays the following:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is it fine up to this point?
What next?
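As a small aside: /proc/scsi/scsi is more safely read with cat than opened
in an editor, and iscsi-ls, which appears a little later in this thread,
gives a fuller view of the session:
[code]
cat /proc/scsi/scsi    # should list the IET VIRTUAL-DISK entry
iscsi-ls               # shows the target name, address and session status
[/code]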
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great!
I ran depmod and it works fine now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
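In this thread the fix turned out to be depmod (see Ajeet's follow-up above
in the archive). A hedged recovery sequence for the "Module iscsi_trgt not
found" error, assuming the installed iscsitarget-kernel package actually
matches the running kernel, would be roughly:
[code]
uname -r                       # kernel that is currently running
depmod -a                      # rebuild the module dependency list
modprobe iscsi_trgt            # load the IET kernel module by hand
service iscsi-target restart
[/code]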
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Could it be because of
the folder? I am not sure.)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Fri Jul 11 10:13:56 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 15:43:56 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>
Hi. I have successfully set up the iSCSI target and initiator. I was able
to create a partition and file system on the earlier raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare (the mount point given under cluster tool >
resources > filesystem).
I also labelled it: e2label /dev/sda1 DATA
But when I tried to restart iscsi on the next cluster node it showed me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
What is this error all about?
Now it is showing up on both the nodes.
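That error usually just means something still holds the iSCSI device open,
most often the mounted filesystem itself; a hedged way to restart cleanly
on a node (mount point taken from the post above):
[code]
umount /newshare          # release the iSCSI-backed filesystem first
service iscsi restart     # the iscsi_sfnet module can then be unloaded
[/code]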
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Sayed.Mujtaba at in.unisys.com Fri Jul 11 10:24:39 2008
From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed)
Date: Fri, 11 Jul 2008 15:54:39 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>
References:
<0139539A634FD04A99C9B8880AB70CB209B17953@in-ex004.groupinfra.com>
Message-ID:
When you mount the file system, check with the df command whether it is
really mounted or not.
Why don't you just stop the iscsi service on both nodes and restart it
again to get a clean state?
Please also search some other forums where the same information may
already be available (google whatever error messages you are getting).
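A hedged version of that check before restarting (mount point as used
earlier in the thread):
[code]
df -h /newshare         # is the filesystem actually mounted here?
fuser -vm /newshare     # which processes, if any, are still using it
umount /newshare        # release it before restarting iscsi on this node
[/code]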
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hai..I have successfully setup iSCSI target and Initiator.I am able to :
Create a partition and file system on earlier raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare(mount point mentioned on cluster tool >
resources > filesystem.
Provided e2label /dev/sda1 DATA
But When I tried to restart the iscsi on the next cluster node it showed
me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
Whats this error all about?
Now its showing on both the node?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To dicover this volume from both nodes, hopefully you are aware of these
iscsi commands
Just giving examples
1) First discover if these volumes are visible
1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is IP address of iscsi )
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing prov,
prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created]
2)Login to iscsi
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 .login
3)do cat /proc/partitions
It should show you /sd **
4)mount that /dev/sd* to any of cluster [it should allow you to mount
from both nodes
Just read some iscsi manuals and do this [withought GUI you can do
that ..Add new resource basically related to clustering resource which
automatically
Mount your shared device when cluster manager is started )
So better configure it using iscsi commands and see whether you can
mount it from both nodes [then you can add a resource about it]
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Ya,I have now created /newshare directory on the both scsi initiator
machine(cluster nodes).
I made the following entry thru system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
Saved the file and sent to the other Cluster Nodes.
Now What Next?
How will I know if the Shared Storage is seen through both the Cluster
Nodes?
Earlier I had a script called duoscript on both the Cluster Nodes.What I
had tested:
I ran the script on both the cluster nodes.I stopped few processes on
one of node,suddenly other took the responsibility.
Now where should I put the script on shared Storage(target)?
Pls Help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re:When I click on Resource >> File System on Cluster Tool...It asked
for Mount point, Device, Option,Name,filesystem id, filesystem
type..What Entry I need to make ?
Create one directory as mount point , Select any file system which you
want to create in list ,you can choose default file system ID there ..
GUI will do the rest ..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I am successful in setting Up iSCSI iniatiator and Target.
What I did is Created a raw partition(unformatted ) on target machine
and restarted both the machine.
I put :
Lun 0 path=/dev/sda6
And That Did job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked.Finally ISCSI Setup Done.
What is the next Step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will do run the same command sequence in the other Cluster Node.
Is it fine upto this point?
What Next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is the above entry correct?
My machine's hostname is pe.logical.com.
I am a little confused about storage.lun1 - what is that?
For now I have not included any incoming or outgoing user, so it is open
to all.
What about the Alias entry?
OK, with this entry made, I am also confused about the client side.
The doc says you need to make an entry in the /etc/iscsi.conf file like this:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What does the above entry mean? Is it just the target's IP?
As for my setup, I am using an RHEL 4.0 machine with IP 10.14.236.134 as
the target machine, and the two nodes 10.14.236.106 and 10.14.236.108 are
already cluster nodes.
Thanks for helping me out. But I also need help with what entry to make
in cluster.conf once these things are completed.
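For context, the identifier after the colon in the IQN (storage.lun1 here)
is just a free-form name chosen by the administrator; the
iqn.YYYY-MM.reversed-domain prefix names the naming authority, and
DiscoveryAddress on the initiator side is simply the target machine's IP
address. A minimal sketch of how the two files could look for the setup
described above, with the target at 10.14.236.134; the exact IQN string and
backing path are illustrative only:
[code]
# /etc/ietd.conf on the target machine (10.14.236.134)
Target iqn.2008-07.com.logical.pe:storage.lun1
        # no IncomingUser/OutgoingUser values means no CHAP - open to any initiator
        Lun 0 Path=/home/vjs/sharess,Type=fileio
        Alias iDISK0

# /etc/iscsi.conf on each cluster node
DiscoveryAddress=10.14.236.134
[/code]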
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to set up iSCSI as I am running short of shared storage.
One of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Do I need to make a separate partition, or what should I put in the
Lun 0 Path= entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
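Spelling out Prakash's suggestion of a file-backed LUN (or a logical volume
carved from the existing volume group) as commands - the size, file path and
LV name below are placeholders, not values from this thread:
[code]
# create a 1 GB backing file for the LUN
dd if=/dev/zero of=/home/test/target_file bs=1M count=1024

# or, instead of a second disk, carve a logical volume out of the existing VG
lvcreate -L 1G -n iscsilun VolGroup00

# then point the LUN at whichever one you created, in /etc/ietd.conf
Lun 0 Path=/home/test/target_file,Type=fileio
#Lun 0 Path=/dev/VolGroup00/iscsilun,Type=fileio
[/code]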
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Fri Jul 11 10:31:11 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 16:01:11 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com>
I rebooted all the machines and this time it seems to work.
But I am stuck again on something.
I can see:
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
On both machines.
But whenever I create a file on one initiator, it does not appear on the
other. Why is that?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:55 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
When you mount the file system, check with the df command whether it is
really mounted.
Why don't you just stop the iscsi service on both nodes and restart it
again for a clean start?
Please also search other forums, where the same information may already
be available (google whatever error messages you are getting).
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hi. I have successfully set up the iSCSI target and initiator, and I was
able to create a partition and file system on the previously raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare (the mount point given in the cluster tool >
Resources > Filesystem)
and labelled it with e2label /dev/sda1 DATA.
But when I tried to restart iscsi on the next cluster node, it showed:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
What is this error about?
It is now showing on both nodes.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To discover this volume from both nodes - hopefully you are aware of these
iscsi commands - here are some examples.
1) First discover whether the volumes are visible:
# iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is the IP address of the iscsi target)
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing the prov, prov-goldilocks1, p3vmware and
p2vmware volumes (whichever have been created).
2) Log in to the iscsi target:
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 .login
3) Do cat /proc/partitions
It should show you the /dev/sd* device.
4) Mount that /dev/sd* on either cluster node (it should allow you to
mount from both nodes).
Just read some iscsi manuals and do this. Without the GUI you can still
add a new resource (a clustering resource that automatically mounts your
shared device when the cluster manager is started).
So it is better to configure it with the iscsi commands first and see
whether you can mount it from both nodes; then you can add a resource
for it.
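Pulled together, that sequence might look like the following on a node with
the open-iscsi tools; the IQN and portal address are taken from the example
above, the login switch is normally written --login, and the device name is
a placeholder. This is a sketch, not output from the machines in this thread:
[code]
# 1) discover the targets the portal exports
iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222

# 2) log in to one of the discovered targets
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 --login

# 3) look for the new SCSI disk the login created
cat /proc/partitions

# 4) mount it from one node at a time (a plain ext3 filesystem is not cluster-aware)
mount /dev/sdb /mnt/test
[/code]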
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Yes, I have now created the /newshare directory on both iSCSI initiator
machines (the cluster nodes).
I made the following entry thru system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
Saved the file and sent to the other Cluster Nodes.
Now What Next?
How will I know if the Shared Storage is seen through both the Cluster
Nodes?
Earlier I had a script called duoscript on both cluster nodes. What I
had tested: I ran the script on both cluster nodes, stopped a few
processes on one node, and the other immediately took over.
Now where should I put the script on the shared storage (target)?
Pls Help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re: When I click on Resource >> File System in the cluster tool, it asks
for mount point, device, option, name, filesystem ID and filesystem
type. What entries do I need to make?
Create a directory to use as the mount point, select whichever file
system you want from the list, and you can leave the file system ID at
its default; the GUI will do the rest.
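For reference, a filesystem resource defined this way is written by
system-config-cluster into /etc/cluster/cluster.conf as something roughly
like the snippet below, using the values listed above (Sharedstorage,
/newshare, /dev/sda6, ext3). This is a sketch of typical rgmanager syntax,
with an illustrative service wrapper, not a copy of the actual file:
[code]
<rm>
  <resources>
    <fs name="Sharedstorage" mountpoint="/newshare" device="/dev/sda6"
        fstype="ext3" force_unmount="1" options=""/>
  </resources>
  <service autostart="1" name="shared_fs_svc">
    <fs ref="Sharedstorage"/>
  </service>
</rm>
[/code]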
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I have succeeded in setting up the iSCSI initiator and target.
What I did was create a raw (unformatted) partition on the target machine
and restart both machines.
I put:
Lun 0 path=/dev/sda6
And that did the job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
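Since the exported partition is still raw, it does need a partition table
and a filesystem before it can be mounted. A minimal sketch, run from one
initiator only; the device name is whatever that node actually sees the IET
virtual disk as, not necessarily /dev/sda:
[code]
fdisk -l                  # identify the new IET VIRTUAL-DISK device
fdisk /dev/sda            # create a partition on it (interactive)
mkfs.ext3 /dev/sda1       # put ext3 on the new partition
e2label /dev/sda1 DATA    # optional filesystem label
mount /dev/sda1 /newshare
[/code]
Note that ext3 is not cluster-aware: only one node should have it mounted
at a time unless a cluster filesystem such as GFS is used instead.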
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. The iSCSI setup is finally done.
What is the next step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is everything fine up to this point?
What next?
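A quick way to confirm that the initiator really sees the exported LUN,
using commands that appear elsewhere in this thread (reading /proc/scsi/scsi
with cat rather than vi):
[code]
cat /proc/scsi/scsi   # should list the IET VIRTUAL-DISK entry
iscsi-ls              # shows the session established to 10.14.236.134:3260
fdisk -l              # the exported LUN shows up as a new /dev/sd* disk
[/code]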
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
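For anyone hitting the same "FATAL: Module iscsi_trgt not found" failure,
the fix described here was presumably along these lines (a sketch
reconstructed from the thread, since the exact commands were not posted):
[code]
depmod -a                # rebuild module dependency data for the running kernel
modprobe iscsi_trgt      # check that the IET target module now loads
service iscsi-target restart
[/code]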
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From Sayed.Mujtaba at in.unisys.com Fri Jul 11 10:37:54 2008
From: Sayed.Mujtaba at in.unisys.com (Mujtaba, Sayed Mohammed)
Date: Fri, 11 Jul 2008 16:07:54 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com>
References:
<0139539A634FD04A99C9B8880AB70CB209B17954@in-ex004.groupinfra.com>
Message-ID:
You are logging in to the same iscsi server (IP address) with the iscsi
commands, so both nodes are connected to the same shared storage.
Just mount it from one node and create some files on it, unmount it from
that node, then mount it from the other node and see whether the files
created on the first node are visible.
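Spelled out as commands, the suggested test is roughly this; the device and
mount point are the ones mentioned earlier in the thread, and the two halves
run on different nodes:
[code]
# on node 1
mount /dev/sda1 /newshare
touch /newshare/testfile
umount /newshare

# on node 2
mount /dev/sda1 /newshare
ls /newshare    # testfile should now be visible, since the fs was cleanly unmounted first
[/code]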
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 4:01 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I rebooted all the machine and this time it seems to work.
But again getting stucked with something.
I can see :
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
On both the machine.
But Whenever I am creating any file on one initiator it don't get
created on another.Why So?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:55 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
When you mount the file system check using df command if it is really
mounted or no ..
Why don't you just stop iscsi service on both nodes and restart it
again to do clean operation..
Please search in some other forums also where you might get same
information available already .(do googling with whatever error
messages what you are getting)
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hai..I have successfully setup iSCSI target and Initiator.I am able to :
Create a partition and file system on earlier raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare(mount point mentioned on cluster tool >
resources > filesystem.
Provided e2label /dev/sda1 DATA
But When I tried to restart the iscsi on the next cluster node it showed
me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
Whats this error all about?
Now its showing on both the node?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To dicover this volume from both nodes, hopefully you are aware of these
iscsi commands
Just giving examples
1) First discover if these volumes are visible
1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is IP address of iscsi )
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing prov,
prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created]
2)Login to iscsi
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 .login
3)do cat /proc/partitions
It should show you /sd **
4)mount that /dev/sd* to any of cluster [it should allow you to mount
from both nodes
Just read some iscsi manuals and do this [withought GUI you can do
that ..Add new resource basically related to clustering resource which
automatically
Mount your shared device when cluster manager is started )
So better configure it using iscsi commands and see whether you can
mount it from both nodes [then you can add a resource about it]
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Ya,I have now created /newshare directory on the both scsi initiator
machine(cluster nodes).
I made the following entry thru system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
Saved the file and sent to the other Cluster Nodes.
Now What Next?
How will I know if the Shared Storage is seen through both the Cluster
Nodes?
Earlier I had a script called duoscript on both the Cluster Nodes.What I
had tested:
I ran the script on both the cluster nodes.I stopped few processes on
one of node,suddenly other took the responsibility.
Now where should I put the script on shared Storage(target)?
Pls Help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re:When I click on Resource >> File System on Cluster Tool...It asked
for Mount point, Device, Option,Name,filesystem id, filesystem
type..What Entry I need to make ?
Create one directory as mount point , Select any file system which you
want to create in list ,you can choose default file system ID there ..
GUI will do the rest ..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I am successful in setting Up iSCSI iniatiator and Target.
What I did is Created a raw partition(unformatted ) on target machine
and restarted both the machine.
I put :
Lun 0 path=/dev/sda6
And That Did job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked.Finally ISCSI Setup Done.
What is the next Step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will do run the same command sequence in the other Cluster Node.
Is it fine upto this point?
What Next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of
Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134
as Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as
Already been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says
that :
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults
1 1
LABEL=/boot /boot ext3 defaults
1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults
1 2
none /dev/pts devpts gid=5,mode=620
0 0
none /dev/shm tmpfs defaults
0 0
none /proc proc defaults
0 0
none /sys sysfs defaults
0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults
0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Should I need to make separate partition or mention ??? under Lun 0
path=??? Entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajeet.singh.raina at logica.com Fri Jul 11 11:06:42 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Fri, 11 Jul 2008 16:36:42 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com>
Brilliant ... it worked.
I think GFS will let us see the files instantly on both cluster nodes.
Is there any doc on setting up GFS?
Pls Help
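As a rough starting point before the proper documentation (the Red Hat GFS
Administrator's Guide for RHEL 4 covers this in detail), creating and
mounting a GFS filesystem on the shared LUN looks something like the sketch
below. The cluster name must match the one in cluster.conf, and the
filesystem name, journal count and device are placeholders:
[code]
# one journal per node that will mount the filesystem (2 nodes here);
# this reformats the device, wiping any existing ext3 data on it
gfs_mkfs -p lock_dlm -t mycluster:gfsdata -j 2 /dev/sda1

# then mount it on BOTH nodes - unlike ext3, GFS is safe to mount concurrently
mount -t gfs /dev/sda1 /newshare
[/code]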
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 4:08 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
You are getting login to same iscsi server(ip address) using iscsi
commands so both are connected to same shared storage ...
Just mount from one node and create some files on it ...unmount from
that node and mount it from other node and see
if created files from first node are visible or no ...
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 4:01 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I rebooted all the machine and this time it seems to work.
But again getting stucked with something.
I can see :
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
On both the machine.
But Whenever I am creating any file on one initiator it don't get
created on another.Why So?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:55 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
When you mount the file system check using df command if it is really
mounted or no ..
Why don't you just stop iscsi service on both nodes and restart it
again to do clean operation..
Please search in some other forums also where you might get same
information available already .(do googling with whatever error
messages what you are getting)
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hai..I have successfully setup iSCSI target and Initiator.I am able to :
Create a partition and file system on earlier raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare(mount point mentioned on cluster tool >
resources > filesystem.
Provided e2label /dev/sda1 DATA
But When I tried to restart the iscsi on the next cluster node it showed
me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
Whats this error all about?
Now its showing on both the node?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To dicover this volume from both nodes, hopefully you are aware of these
iscsi commands
Just giving examples
1) First discover if these volumes are visible
1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is IP address of iscsi )
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing prov,
prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created]
2)Login to iscsi
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 .login
3)do cat /proc/partitions
It should show you /sd **
4)mount that /dev/sd* to any of cluster [it should allow you to mount
from both nodes
Just read some iscsi manuals and do this [withought GUI you can do
that ..Add new resource basically related to clustering resource which
automatically
Mount your shared device when cluster manager is started )
So better configure it using iscsi commands and see whether you can
mount it from both nodes [then you can add a resource about it]
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Yes, I have now created the /newshare directory on both iSCSI initiator
machines (the cluster nodes).
I made the following entry through system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
I saved the file and sent it to the other cluster nodes.
Now what next?
How will I know whether the shared storage is seen from both cluster
nodes?
Earlier I had a script called duoscript on both cluster nodes. What I had
tested: I ran the script on both cluster nodes, stopped a few processes on
one node, and the other immediately took over responsibility.
Now where should I put the script on the shared storage (target)?
Pls Help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re: When I click on Resource >> File System in the cluster tool, it asks
for mount point, device, option, name, filesystem id and filesystem
type. What entries do I need to make?
Create a directory to use as the mount point, select the file system you
want from the list, and you can accept the default file system ID there.
The GUI will do the rest.
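(For reference, what the GUI ends up writing into /etc/cluster/cluster.conf is
roughly a fragment like the one below. This is only a sketch: the attribute
names are those of the rgmanager fs resource agent, the values are taken from
the earlier mail, and the fsid will be whatever the tool assigns:

    <rm>
      <resources>
        <fs name="Sharedstorage" device="/dev/sda6" mountpoint="/newshare"
            fstype="ext3" fsid="1" force_unmount="1"/>
      </resources>
    </rm>
)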
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I have succeeded in setting up the iSCSI initiator and target.
What I did was create a raw (unformatted) partition on the target machine
and restart both machines.
I put:
Lun 0 Path=/dev/sda6
and that did the job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "VIRTUAL-DISK" entry confirms that.
Now I am making an entry in
#system-config-cluster and want to know what exact entry I need to make
here:
When I click on Resource >> File System in the cluster tool, it asks for
mount point, device, option, name, filesystem id and filesystem type. What
entries do I need to make?
My machine's address is 10.14.236.134.
The path where the unformatted partition was made is /dev/sda6.
For now I have only the unformatted partition. Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. The iSCSI setup is finally done.
What is the next step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed the doc as described and it went this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It displays this:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is everything fine up to this point?
What next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and now it works fine.
Thanks for the link anyway.
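(For the record, roughly the sequence that got it going; a sketch, with the
module and service names taken from the error messages earlier in this thread:

    depmod -a                     # rebuild module dependencies for the new kernel rpm
    modprobe iscsi_trgt           # load the target module
    service iscsi-target restart
)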
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Could the problem be because
of the folder? I'm not sure.)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following entry in my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is the above entry correct?
My machine's hostname is pe.logical.com.
I am a little confused about storage.lun1. What is that?
I have not included any incoming or outgoing user, so it is open for all.
What about the Alias entry?
OK, after this entry is made, I am also confused about the client side.
The doc says you need to make an entry in the /etc/iscsi.conf file like:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What does the above entry mean? The IP?
As for my setup, I am using a RHEL 4.0 machine with IP 10.14.236.134 as the
target machine, and the two nodes 10.14.236.106 and 10.14.236.108 are
already cluster nodes.
Thanks for helping me out. But please also help me with what entry I need to
make in cluster.conf after these things are completed?
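(For what it's worth, the target name is just an iSCSI Qualified Name of the
form iqn.<yyyy-mm>.<reversed-domain>:<label>, and the part after the colon
(here storage.lun1) is free text that identifies the storage. So for a host in
logical.com something along these lines would be a reasonable sketch; the date
part and the label are arbitrary examples, not requirements:

    Target iqn.2008-07.com.logical.pe:storage.lun1
            Lun 0 Path=/home/vjs/sharess,Type=fileio
            Alias iDISK0
)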
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Do I need to mention Lun 0? Is it needed?
Yes, of course it's needed.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to set up iSCSI as I am running short of shared storage.
One of the docs,
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says
that:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00   /              ext3     defaults                        1 1
LABEL=/boot                /boot          ext3     defaults                        1 2
/dev/VolGroup00/LogVol02   /data          ext3     defaults                        1 2
none                       /dev/pts       devpts   gid=5,mode=620                  0 0
none                       /dev/shm       tmpfs    defaults                        0 0
none                       /proc          proc     defaults                        0 0
none                       /sys           sysfs    defaults                        0 0
#/dev/dvd                  /mnt/dvd       auto     defaults,exec,noauto,managed    0 0
/dev/hda                   /media/cdrom            pamconsole,exec,noauto,managed  0 0
/dev/VolGroup00/LogVol01   swap           swap     defaults                        0 0
[/code]
Since I need to make an entry like:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
in /etc/ietd.conf,
do I need to make a separate partition, or what should I put for the Lun 0
Path=??? entry?
If you wish you can create a separate partition. Else create a file &
give the full path of the file. [e.g path=/home/test/target_file]
Pls Help
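(For example, a minimal sketch of the file-backed variant described above; the
size is an arbitrary illustration and the path matches the example given:

    dd if=/dev/zero of=/home/test/target_file bs=1M count=1024   # 1 GB backing file
    # then in /etc/ietd.conf:
    #   Lun 0 Path=/home/test/target_file,Type=fileio
    service iscsi-target restart
)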
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be copied,
disclosed to, retained or used by, any other party. If you are not an
intended recipient then please promptly delete this e-mail and any
attachment and all copies and inform the sender. Thank you.
From pedroche5 at gmail.com Fri Jul 11 11:15:36 2008
From: pedroche5 at gmail.com (Pedro Gonzalez Zamora)
Date: Fri, 11 Jul 2008 13:15:36 +0200
Subject: [Linux-cluster] CMAN configuration
Message-ID: <47311dd20807110415h14c5eaf3o48ca67c7a3f2e44c@mail.gmail.com>
Hi all,
I am trying to configure a cluster but I am running into some problems that
I don't understand.
Jul 11 12:34:31 dib1-s1 ccsd[6329]: Initial status:: Inquorate
Jul 11 12:34:31 dib1-s1 kernel: CMAN: sending membership request
Jul 11 12:34:31 dib1-s1 kernel: CMAN: Cluster membership rejected
Jul 11 12:34:31 dib1-s1 ccsd[6329]: Cluster manager shutdown. Attemping to
reconnect...
Jul 11 12:34:31 dib1-s1 kernel: CMAN: Waiting to join or form a
Linux-cluster
Jul 11 12:34:31 dib1-s1 cman: Timed-out waiting for cluster failed
Jul 11 12:34:32 dib1-s1 ccsd[6329]: Connected to cluster infrastruture via:
CMAN/SM Plugin v1.1.7.1
Jul 11 12:34:32 dib1-s1 ccsd[6329]: Initial status:: Inquorate
Jul 11 12:34:35 dib1-s1 kernel: CMAN: sending membership request
Jul 11 12:34:35 dib1-s1 kernel: CMAN: Cluster membership rejected
Jul 11 12:34:35 dib1-s1 ccsd[6329]: Cluster manager shutdown. Attemping to
reconnect...
Jul 11 12:35:03 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78840 seconds.
Jul 11 12:35:33 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78870 seconds.
Jul 11 12:36:03 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78900 seconds.
Jul 11 12:36:33 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78930 seconds.
Jul 11 12:37:03 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78960 seconds.
Jul 11 12:37:33 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 78990 seconds.
Jul 11 12:38:03 dib1-s1 ccsd[6329]: Unable to connect to cluster
infrastructure after 79020 seconds
Best Regards
Pedro
From fog at t.is Fri Jul 11 14:27:02 2008
From: fog at t.is (Finnur Örn Guðmundsson - TM Software)
Date: Fri, 11 Jul 2008 14:27:02 -0000
Subject: [Linux-cluster] Monitoring services with Nagios
Message-ID: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is>
Hi,
I was planning on monitoring the status of a service from clustat (run clustat, grab the output).
As I am running an x86_64 system, I cannot seem to load the correct library for snmpd to be able to read any data from it:
nmpd[30150]: dlopen failed: /usr/lib64/cluster-snmp/libClusterMonitorSnmp.so: undefined symbol: _ZN17ClusterMonitoring7Cluster15runningServicesEv
How do you monitor your cluster with Nagios or other open source solutions? (What scripts do you use, etc.)
Kær kveðja / Best Regards,
Finnur Örn Guðmundsson
Network Engineer - Network Operations
fog at t.is
TM Software
Urðarhvarf 6, IS-203 Kópavogur, Iceland
Tel: +354 545 3000 - fax +354 545 3610
www.tm-software.is
This e-mail message and any attachments are confidential and may be privileged. TM Software e-mail disclaimer: www.tm-software.is/disclaimer
From stpierre at NebrWesleyan.edu Fri Jul 11 14:55:26 2008
From: stpierre at NebrWesleyan.edu (Chris St. Pierre)
Date: Fri, 11 Jul 2008 09:55:26 -0500 (CDT)
Subject: [Linux-cluster] Monitoring services with Nagios
In-Reply-To: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is>
References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is>
Message-ID:
I've attached my (very basic) check_rhcs script that I use with
Nagios. HTH.
Chris St. Pierre
Unix Systems Administrator
Nebraska Wesleyan University
On Fri, 11 Jul 2008, Finnur Örn Guðmundsson - TM Software wrote:
> Hi,
>
>
>
> I was planning on monitoring the status of a service from clustat (run clustat, grab the output).
>
> And as i am running a x86_64 system i can not seem to load the correct lib for snmpd to be able to read any data from it:
>
> nmpd[30150]: dlopen failed: /usr/lib64/cluster-snmp/libClusterMonitorSnmp.so: undefined symbol: _ZN17ClusterMonitoring7Cluster15runningServicesEv
>
>
>
> How do you monitor your cluster with Nagios/Other open source solutions ? (What scripts do you use etc).
>
>
>
> Kær kveðja / Best Regards,
>
> Finnur Örn Guðmundsson
> Network Engineer - Network Operations
> fog at t.is
>
> TM Software
> Urðarhvarf 6, IS-203 Kópavogur, Iceland
> Tel: +354 545 3000 - fax +354 545 3610
> www.tm-software.is
>
> This e-mail message and any attachments are confidential and may be privileged. TM Software e-mail disclaimer: www.tm-software.is/disclaimer
>
>
-------------- next part --------------
#! /usr/bin/perl -w
#
# $Id: check_rhcs 11710 2008-06-25 19:50:44Z stpierre $
#
# check_rhcs
#
# Nagios host script to check a Redhat Cluster Suite cluster

require 5.004;

use strict;
use lib qw(/usr/lib/nagios/plugins /usr/lib64/nagios/plugins /usr/local/nagios/libexec);
use utils qw($TIMEOUT %ERRORS &print_revision &support &usage);
use XML::Simple;

sub cleanup($$);

my $PROGNAME = "check_rhcs";
my $clustat  = "/usr/sbin/clustat";

if (!-e $clustat) {
    cleanup("UNKNOWN", "$clustat not found");
} elsif (!-x $clustat) {
    cleanup("UNKNOWN", "$clustat not executable");
}

# Just in case of problems, let's not hang Nagios
$SIG{'ALRM'} = sub {
    cleanup("UNKNOWN", "clustat timed out");
};
alarm($TIMEOUT);

my $output = `$clustat -x`;
my $retval = $?;

# Turn off alarm
alarm(0);

if ($output =~ /cman is not running/) {
    cleanup("CRITICAL", $output);
} else {
    my $status = XMLin($output, ForceArray => ['group']);

    # check quorum
    if (!$status->{'quorum'}->{'quorate'}) {
        cleanup("CRITICAL", "Cluster is not quorate");
    }

    # check nodes
    my %nodes = %{$status->{'nodes'}->{'node'}};
    foreach my $node (keys(%nodes)) {
        if (!$nodes{$node}->{'state'}) {
            cleanup("WARNING", "Node $node is down");
        } elsif (!$nodes{$node}->{'rgmanager'}) {
            cleanup("WARNING", "rgmanager is not running on node $node");
        }
    }

    # check services
    my %svcs = %{$status->{'groups'}->{'group'}};
    foreach my $svc (keys(%svcs)) {
        if ($svcs{$svc}->{'state_str'} ne 'started') {
            cleanup("CRITICAL", "$svc is in state " . $svcs{$svc}->{'state_str'});
        }
    }

    # check return value
    if ($retval) {
        cleanup("UNKNOWN",
                "Cluster appeared okay, but clustat returned $retval");
    }
}

cleanup("OK", "Cluster is sound");

##############################
#   Subroutines start here   #
##############################

sub cleanup ($$) {
    my ($state, $answer) = @_;

    print "Cluster $state: $answer\n";
    exit $ERRORS{$state};
}
From lhh at redhat.com Fri Jul 11 19:26:18 2008
From: lhh at redhat.com (Lon Hohberger)
Date: Fri, 11 Jul 2008 15:26:18 -0400
Subject: [Linux-cluster] Monitoring services with Nagios
In-Reply-To:
References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is>
Message-ID: <1215804378.27354.22.camel@localhost.localdomain>
On Fri, 2008-07-11 at 09:55 -0500, Chris St. Pierre wrote:
> I've attached my (very basic) check_rhcs script that I use with
> Nagios. HTH.
You should use clustat -fx
(f = fast / lockless)
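(i.e. in the plugin above, run the lockless status query rather than the plain
one; a sketch, assuming the flags combine as described:

    clustat -fx        # -f = fast/lockless, -x = XML output
)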
-- Lon
From sghosh at redhat.com Fri Jul 11 20:35:05 2008
From: sghosh at redhat.com (Subhendu Ghosh)
Date: Fri, 11 Jul 2008 16:35:05 -0400
Subject: [Linux-cluster] Monitoring services with Nagios
In-Reply-To: <1215804378.27354.22.camel@localhost.localdomain>
References: <3DDA6E3E456E144DA3BB0A62A7F7F77902274384@SKYHQAMX08.klasi.is>
<1215804378.27354.22.camel@localhost.localdomain>
Message-ID: <4877C3F9.8060108@redhat.com>
Lon Hohberger wrote:
> On Fri, 2008-07-11 at 09:55 -0500, Chris St. Pierre wrote:
>> I've attached my (very basic) check_rhcs script that I use with
>> Nagios. HTH.
>
> You should use clustat -fx
>
> (f = fast / lockless)
>
> -- Lon
Is there any interest in submitting the script to the standard plugins
(GPLv3)? Happy to help get it in :)
--
- regards
Subhendu Ghosh
From bfields at fieldses.org Fri Jul 11 22:35:39 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Fri, 11 Jul 2008 18:35:39 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <4875D5DE.7030601@redhat.com>
References: <20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org>
<4875D5DE.7030601@redhat.com>
Message-ID: <20080711223539.GG23069@fieldses.org>
On Thu, Jul 10, 2008 at 10:26:54AM +0100, Christine Caulfield wrote:
> J. Bruce Fields wrote:
>> On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
>>> J. Bruce Fields wrote:
>>>> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
>>>>> Steven Whitehouse wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
>>>>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
>>>>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
>>>>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
>>>>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
>>>>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
>>>>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
>>>>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
>>>>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
>>>>>>>>>> getting some dlm reply it expects?
>>>>>>>>> dlm inter-node communication is not working here for some reason. There
>>>>>>>>> must be something unusual with the way the network is configured on the
>>>>>>>>> nodes, and/or a problem with the way the cluster code is applying the
>>>>>>>>> network config to the dlm.
>>>>>>>>>
>>>>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
>>>>>>>>> when a network interface has multiple IP addresses, and/or routing is
>>>>>>>>> configured strangely. Others cc'ed could offer better details on exactly
>>>>>>>>> what to look for.
>>>>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
>>>>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
>>>>>>>> On the kvm host there are 4 virtual interfaces bridged together:
>>>>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
>>>>>>> the second machine opened a tcp connection to port 21064 on the first
>>>>>>> (which had already completed the mount), and sent it a single message
>>>>>>> identified by wireshark as "DLM3" protocol, type recovery command:
>>>>>>> status command. It got back an ACK then a RST.
>>>>>>>
>>>>>>> Then the same happened in the other direction, with the first machine
>>>>>>> sending a similar message to port 21064 on the second, which then reset
>>>>>>> the connection.
>>>>>>>
>>>>> That's a symptom of the "connect from non-cluster node" error in
>>>>> the DLM.
>>>> I think I am getting a message to that affect in my logs.
>>>>
>>>>> It's got a connection from an IP address that is not known to
>>>>> cman. So it closes it as a spoofer
>>>> OK. Is there an easy way to see the list of ip addresses known to cman?
>>> yes,
>>>
>>> cman_tool nodes -a
>>>
>>> will show you all the nodes and their known IP addresses
>>
>> piglet2:~# cman_tool nodes -a
>> Node  Sts   Inc   Joined               Name
>>    1   M    376   2008-07-09 12:30:32  piglet1
>>        Addresses: 192.168.122.129
>>    2   M    368   2008-07-09 12:30:31  piglet2
>>        Addresses: 192.168.122.130
>>    3   M    380   2008-07-09 12:30:33  piglet3
>>        Addresses: 192.168.122.131
>>    4   M    372   2008-07-09 12:30:31  piglet4
>>        Addresses: 192.168.122.132
>>
>> These addresses are correct (and are the same addresses that show up in the
>> packet trace).
>>
>> I must be overlooking something very obvious....
>
> Hmm, very odd.
>
> Are those IP addresses consistent across all nodes in the cluster ?
Yes, "cman_tool nodes -a" gives the same IP addresses no matter which of
the four cluster nodes it's run on.
--b.
From bfields at fieldses.org Fri Jul 11 23:25:29 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Fri, 11 Jul 2008 19:25:29 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <1215696434.4011.161.camel@quoit>
References: <20080706215105.GA28037@fieldses.org>
<20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org>
<1215696434.4011.161.camel@quoit>
Message-ID: <20080711232529.GH23069@fieldses.org>
On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> Hi,
>
> On Wed, 2008-07-09 at 12:32 -0400, J. Bruce Fields wrote:
> > On Wed, Jul 09, 2008 at 04:50:14PM +0100, Christine Caulfield wrote:
> > > J. Bruce Fields wrote:
> > >> On Wed, Jul 09, 2008 at 09:51:02AM +0100, Christine Caulfield wrote:
> > >>> Steven Whitehouse wrote:
> > >>>> Hi,
> > >>>>
> > >>>> On Tue, 2008-07-08 at 18:15 -0400, J. Bruce Fields wrote:
> > >>>>> On Mon, Jul 07, 2008 at 02:49:28PM -0400, bfields wrote:
> > >>>>>> On Mon, Jul 07, 2008 at 10:48:28AM -0500, David Teigland wrote:
> > >>>>>>> On Sun, Jul 06, 2008 at 05:51:05PM -0400, J. Bruce Fields wrote:
> > >>>>>>>> - write(control_fd, in, sizeof(struct gdlm_plock_info));
> > >>>>>>>> + write(control_fd, in, sizeof(struct dlm_plock_info));
> > >>>>>>> Gah, sorry, I keep fixing that and it keeps reappearing.
> > >>>>>>>
> > >>>>>>>
> > >>>>>>>> Jul 1 14:06:42 piglet2 kernel: dlm: connect from non cluster node
> > >>>>>>>> It looks like dlm_new_workspace() is waiting on dlm_recoverd, which is
> > >>>>>>>> in "D" state in dlm_rcom_status(), so I guess the second node isn't
> > >>>>>>>> getting some dlm reply it expects?
> > >>>>>>> dlm inter-node communication is not working here for some reason. There
> > >>>>>>> must be something unusual with the way the network is configured on the
> > >>>>>>> nodes, and/or a problem with the way the cluster code is applying the
> > >>>>>>> network config to the dlm.
> > >>>>>>>
> > >>>>>>> Ah, I just remembered what this sounds like; we see this kind of thing
> > >>>>>>> when a network interface has multiple IP addresses, and/or routing is
> > >>>>>>> configured strangely. Others cc'ed could offer better details on exactly
> > >>>>>>> what to look for.
> > >>>>>> OK, thanks! I'm trying to run gfs2 on 4 kvm machines, I'm an expert on
> > >>>>>> neither, and it's entirely likely there's some obvious misconfiguration.
> > >>>>>> On the kvm host there are 4 virtual interfaces bridged together:
> > >>>>> I ran wireshark on vnet0 while doing the second mount; what I saw was
> > >>>>> the second machine opened a tcp connection to port 21064 on the first
> > >>>>> (which had already completed the mount), and sent it a single message
> > >>>>> identified by wireshark as "DLM3" protocol, type recovery command:
> > >>>>> status command. It got back an ACK then a RST.
> > >>>>>
> > >>>>> Then the same happened in the other direction, with the first machine
> > >>>>> sending a similar message to port 21064 on the second, which then reset
> > >>>>> the connection.
> > >>>>>
> > >>> That's a symptom of the "connect from non-cluster node" error in the
> > >>> DLM.
> > >>
> > >> I think I am getting a message to that affect in my logs.
> > >>
> > >>> It's got a connection from an IP address that is not known to cman.
> > >>> So it closes it as a spoofer
> > >>
> > >> OK. Is there an easy way to see the list of ip addresses known to cman?
> > >
> > > yes,
> > >
> > > cman_tool nodes -a
> > >
> > > will show you all the nodes and their known IP addresses
> >
> > piglet2:~# cman_tool nodes -a
> > Node Sts Inc Joined Name
> > 1 M 376 2008-07-09 12:30:32 piglet1
> > Addresses: 192.168.122.129
> > 2 M 368 2008-07-09 12:30:31 piglet2
> > Addresses: 192.168.122.130
> > 3 M 380 2008-07-09 12:30:33 piglet3
> > Addresses: 192.168.122.131
> > 4 M 372 2008-07-09 12:30:31 piglet4
> > Addresses: 192.168.122.132
> >
> > These addresses are correct (and are the same addresses that show up in the
> > packet trace).
> >
> > I must be overlooking something very obvious....
> >
> > --b.
> >
> There is something v. odd in the packet trace you sent:
>
> 16:31:25.513487 00:16:3e:2a:e6:4b (oui Unknown) > 00:16:3e:16:4d:61 (oui Unknown),
> ethertype IPv4 (0x0800), length 74: 192.168.122.130.41170 > 192.168.122.129.21064:
> S 1424458172:1424458172(0) win 5840 140931 0,nop,wscale 4>
>
> here we have a packet from .130 (00:16:3e:2a:e6:4b) to .129
> (00:16:3e:16:4d:61) but next we see:
>
> 16:31:25.513880 00:ff:1d:e9:b9:a3 (oui Unknown) > 00:16:3e:2a:e6:4b (oui Unknown),
> ethertype IPv4 (0x0800), length 74: 192.168.122.129.21064 > 192.168.122.130.41170:
> S 1340956343:1340956343(0) ack 1424458173 win 5792 1460,sackOK,timestamp 140842 140931,nop,wscale 4>
>
> a packet thats supposedly from .129 except that its mac address is now
> 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured
> on two different nodes, either that or there is something odd going on
> with bridging.
The mystery mac address 00:ff:1d:e9:b9:a3 is that of both vnet0 and vnet4. vnet0
is the bridge, which has ip .1 on the host, and which is also the
interface that wireshark is being run on. The other two addresses are
the mac addresses of the (virtual) ethernet interfaces inside the two
kvm's, with ip's .129 and .130 respectively. So .130 is sending to the
expected mac address for .129, but responses from .130 are getting the
mac address of vnet0/vnet4.
I'm running wireshark on the host on vnet0. Just out of curiosity, I
ran it on the host on vnet1 instead, and this time saw the first DLM
connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll
experiment some more and look at the /sbin/ip output.
--b.
> If that still doesn't help you solve the problem, can you
> do a:
>
> /sbin/ip addr list
> /sbin/ip route list
> /sbin/ip neigh list
>
> on each node and the "host" after an failed attempt so that we can try
> and match up the mac addresses with the interfaces in the trace?
>
> I don't think that we are too far away from a solution now,
From bfields at fieldses.org Sat Jul 12 03:33:08 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Fri, 11 Jul 2008 23:33:08 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080711232529.GH23069@fieldses.org>
References: <20080707154828.GB10404@redhat.com>
<20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org>
<1215696434.4011.161.camel@quoit>
<20080711232529.GH23069@fieldses.org>
Message-ID: <20080712033308.GA29498@fieldses.org>
On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote:
> On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> > a packet thats supposedly from .129 except that its mac address is now
> > 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured
> > on two different nodes, either that or there is something odd going on
> > with bridging.
>
> The mystery mac address 00:ff:1d:e9:b9:a3 is that of both vnet0 and vnet4. vnet0
> is the bridge, which has ip .1 on the host, and which is also the
> interface that wireshark is being run on. The other two addresses are
> the mac addresses of the (virtual) ethernet interfaces inside the two
> kvm's, with ip's .129 and .130 respectively. So .130 is sending to the
> expected mac address for .129, but responses from .130 are getting the
> mac address of vnet0/vnet4.
>
> I'm running wireshark on the host on vnet0. Just out of curiosity, I
> ran it on the host on vnet1 instead, and this time saw the first DLM
> connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll
> experiment some more and look at the /sbin/ip output.
Bah, yes, I clearly got the network configuration completely screwed up
at some point--it must be trying to do some kind of NAT, though that
clearly makes no sense. I'll get this untangled and then I think it
should be OK....
--b.
From theophanis_kontogiannis at yahoo.gr Sun Jul 13 00:14:39 2008
From: theophanis_kontogiannis at yahoo.gr (Theophanis Kontogiannis)
Date: Sun, 13 Jul 2008 03:14:39 +0300
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com>
References:
<0139539A634FD04A99C9B8880AB70CB209B17957@in-ex004.groupinfra.com>
Message-ID: <00a801c8e47d$72d6e4f0$5884aed0$@gr>
Hello,
Yes, for instant access to the files from all nodes you need a cluster-aware
file system like GFS (or GFS2, which is still at an experimental stage).
You can try the following links:
http://www.redhat.com/docs/manuals/csgfs/ (under the GFS section)
http://gfs.wikidev.net/Main_Page
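As a very rough sketch of the GFS route with RHEL4-era tools (the cluster name
"mycluster", the file system name "newshare" and the journal count below are
placeholders to adapt, and /dev/sda1 is the shared iSCSI partition from the
earlier mails):

    gfs_mkfs -p lock_dlm -t mycluster:newshare -j 2 /dev/sda1
    mount -t gfs /dev/sda1 /newshare      # on each cluster node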
BR
Theophanis Kontogiannis
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Friday, July 11, 2008 2:07 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Brilliant. It worked.
I think GFS will enable us to see the files instantly on both cluster
nodes.
Any doc related to setting up GFS?
Pls Help
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 4:08 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
You are logging in to the same iscsi server (IP address) using the iscsi
commands, so both nodes are connected to the same shared storage.
Just mount it from one node and create some files on it, unmount it from that
node, then mount it from the other node and see
whether the files created from the first node are visible or not.
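(A rough sketch of that test, using the device and mount point from earlier in
this thread:

    # on node 1
    mount /dev/sda1 /newshare && touch /newshare/testfile && umount /newshare
    # on node 2
    mount /dev/sda1 /newshare && ls -l /newshare
)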
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Friday, July 11, 2008 4:01 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I rebooted all the machines and this time it seems to work.
But I'm getting stuck again with something.
I can see:
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
on both machines.
But whenever I create a file on one initiator, it does not get created on
the other. Why is that?
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:55 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
When you mount the file system check using df command if it is really
mounted or no ..
Why don't you just stop iscsi service on both nodes and restart it again to
do clean operation..
Please search in some other forums also where you might get same
information available already .(do googling with whatever error messages
what you are getting)
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hai..I have successfully setup iSCSI target and Initiator.I am able to :
Create a partition and file system on earlier raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare(mount point mentioned on cluster tool > resources
> filesystem.
Provided e2label /dev/sda1 DATA
But When I tried to restart the iscsi on the next cluster node it showed me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
Whats this error all about?
Now its showing on both the node?
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To dicover this volume from both nodes, hopefully you are aware of these
iscsi commands
Just giving examples
1) First discover if these volumes are visible
1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is IP address of iscsi )
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing prov,
prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created]
2)Login to iscsi
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 .login
3)do cat /proc/partitions
It should show you /sd **
4)mount that /dev/sd* to any of cluster [it should allow you to mount
from both nodes
Just read some iscsi manuals and do this [withought GUI you can do that
...Add new resource basically related to clustering resource which
automatically
Mount your shared device when cluster manager is started )
So better configure it using iscsi commands and see whether you can mount it
from both nodes [then you can add a resource about it]
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Ya,I have now created /newshare directory on the both scsi initiator
machine(cluster nodes).
I made the following entry thru system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
Saved the file and sent to the other Cluster Nodes.
Now What Next?
How will I know if the Shared Storage is seen through both the Cluster
Nodes?
Earlier I had a script called duoscript on both the Cluster Nodes.What I had
tested:
I ran the script on both the cluster nodes.I stopped few processes on one of
node,suddenly other took the responsibility.
Now where should I put the script on shared Storage(target)?
Pls Help
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re:When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry
I need to make ?
Create one directory as mount point , Select any file system which you
want to create in list ,you can choose default file system ID there ..
GUI will do the rest ..
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I am successful in setting Up iSCSI iniatiator and Target.
What I did is Created a raw partition(unformatted ) on target machine and
restarted both the machine.
I put :
Lun 0 path=/dev/sda6
And That Did job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What Entry
I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
****************************************************************************
***
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
****************************************************************************
***
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
****************************************************************************
***
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked.Finally ISCSI Setup Done.
What is the next Step?
Pls help
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed as said in the doc and found it this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is Displaying so:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will do run the same command sequence in the other Cluster Node.
Is it fine upto this point?
What Next?
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great !!!
I ran depmod and it ran well now.
Thanks for the link anyway.
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.r
pm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ########################################### [
50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created a cluster_share Folder earlier.(Is it bocoz of Folder?)Doubt??
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053
ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033
ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029
ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026
ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030
ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:57 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
So I have the following Entry at my ietd.conf file:
# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/home/vjs/sharess,Type=fileio
Alias iDISK0
#MaxConnections 6
Is above Entry Correct?
My machine Hostname is pe.logical.com.
Little confused about storage.lun1 whats that?
I have now not included any incoming or outgoing user?Its open for all.
What About Alias Entry?
Ok After this entry being made, I have confusion on client side too.
The Doc says You need to make Entry on /etc/iscsi.conf file as:
# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28
What's the above entry means?IP??
As for My Setup I am setting up RHEL 4.0 machine with IP 10.14.236.134 as
Target Machine and The two Nodes 10.14.236.106 and 10.14.236 108 as Already
been in Cluster Nodes.
Thanks for Helping me out. But You need to also Help me What Entry in
Cluster.conf I need to make after these things being completed?
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:48 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:42 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Shall I need to mention Lun 0 ? is it needed?
Yes, of course it's needed
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 2:38 PM
To: linux clustering
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
_____
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 2:22 PM
To: linux-cluster at redhat.com
Subject: [Linux-cluster] iSCSI Setup as Alternative to Shared Storage..
I want to setup iSCSI as I am running short of Shared Storage.
In one of the Doc
http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI it says that
:
[doc]
Install the Target
1. Install RHEL4, I used kickstart with just "@ base" for packages.
Configure the system with two drives sda and sdb or create two logical
volumes(lvm). The first disk is for the OS and the second for the iSCSI
storage
[/doc]
My Hard Disk Partition says:
[code]
[root at vjs ~]# fdisk -l
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 9729 78043770 8e Linux LVM
[/code]
[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
/dev/VolGroup00/LogVol02 /data ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
#/dev/dvd /mnt/dvd auto
defaults,exec,noauto,enaged 0 0
/dev/hda /media/cdrom
pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
[/code]
Since I need to make entry on:
iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
IncomingUser gfs secretsecret
OutgoingUser
Lun 0 Path=/dev/sdb,Type=fileio
Alias iDISK0
#MaxConnections 6
In /etc/ietd.conf
Do I need to make a separate partition, or what should I mention under the
Lun 0 Path= entry?
If you wish you can create a separate partition. Else create a file & give
the full path of the file. [e.g path=/home/test/target_file]
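For reference, a minimal sketch of the file-backed variant; the 1 GB size is
arbitrary and the path just follows the example above:
# on the target machine, create the backing file first
mkdir -p /home/test
dd if=/dev/zero of=/home/test/target_file bs=1M count=1024
# then point the LUN at it in /etc/ietd.conf
Lun 0 Path=/home/test/target_file,Type=fileio
# and restart the target
service iscsi-target restart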
Pls Help
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be copied,
disclosed to, retained or used by, any other party. If you are not an
intended recipient then please promptly delete this e-mail and any
attachment and all copies and inform the sender. Thank you.
From dirk.schulz at kinzesberg.de Sun Jul 13 17:23:50 2008
From: dirk.schulz at kinzesberg.de (Dirk H. Schulz)
Date: Sun, 13 Jul 2008 19:23:50 +0200
Subject: [Linux-cluster] cluster service not running any more
Message-ID: <421FFAB7307706E7651DAD7C@file.wkd-druck.org>
Hi folks,
I have set up a cluster on 5.2 with system-config-cluster. It is quite
simple: the only service is an IP resource that is switched over.
The cluster started up fine the first time, and the virtual IP was where it
belonged. Since then I have not changed anything; I simply had to restart
the machines for other reasons.
Now nothing works as it should:
- shutting down clurgmgrd normally (service rgmanager stop) is impossible;
even kill -9 does not work. I have to call "reboot" twice to force a reboot
in order to stop clurgmgrd.
- after reboot I can manually start the cluster again (I did not venture to
do it at system startup); the daemons start and nothing unusual is logged,
but
a) the service containing the IP resource is not started
b) clustat on the primary node complains "timed out trying to connect to
Resource Group Manager"
c) clustat on both nodes shows the node state, but does not list the
service
I have tried everything to get the environment clean (shut down the
firewall, set SELinux to permissive, etc.), but the result is always the
same. Since I did not change anything after the first successful start of
the cluster, I wonder
- whether there is some runtime data or temporary file the resource group
manager writes to disk and tries to reread after reboot (remember, I had to
kill it by force to be able to reboot my machines)
- whether it is possible at all to successfully run a cluster with cman and
clurgmgrd.
In case it helps, here is my cluster.conf:
The logs show the nodes successfully joining the cluster and similar
messages, with clurgmgrd starting as the last entry; after that, nothing
more from the cluster daemons.
Any hint or help is appreciated. I am stuck and do not know where to look.
Dirk
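(The cluster.conf itself did not make it into the archive. Purely as an
illustration of the kind of configuration described, a sketch of a minimal
two-node cluster with one switched IP service follows; the cluster name,
node names and IP address are made-up placeholders, not the original
values, and a real setup would also need proper fence devices.)
<?xml version="1.0"?>
<cluster name="testcluster" config_version="1">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
  </clusternodes>
  <fencedevices/>
  <rm>
    <resources>
      <ip address="192.168.0.100" monitor_link="1"/>
    </resources>
    <service autostart="1" name="vip-service">
      <ip ref="192.168.0.100"/>
    </service>
  </rm>
</cluster>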
From bfields at fieldses.org Sun Jul 13 20:20:16 2008
From: bfields at fieldses.org (J. Bruce Fields)
Date: Sun, 13 Jul 2008 16:20:16 -0400
Subject: [Linux-cluster] gfs2, kvm setup
In-Reply-To: <20080712033308.GA29498@fieldses.org>
References: <20080707184928.GE14291@fieldses.org>
<20080708221533.GI15038@fieldses.org>
<1215593064.3411.6.camel@localhost.localdomain>
<48747BF6.2060001@redhat.com> <20080709154004.GC5780@fieldses.org>
<4874DE36.6030704@redhat.com> <20080709163222.GF5780@fieldses.org>
<1215696434.4011.161.camel@quoit>
<20080711232529.GH23069@fieldses.org>
<20080712033308.GA29498@fieldses.org>
Message-ID: <20080713202016.GA2810@fieldses.org>
On Fri, Jul 11, 2008 at 11:33:08PM -0400, bfields wrote:
> On Fri, Jul 11, 2008 at 07:25:29PM -0400, bfields wrote:
> > On Thu, Jul 10, 2008 at 02:27:14PM +0100, Steven Whitehouse wrote:
> > > a packet thats supposedly from .129 except that its mac address is now
> > > 0:ff:1d:e9:b9:a3. So it looks like the .129 address might be configured
> > > on two different nodes, either that or there is something odd going on
> > > with bridging.
> >
> > The mystery mac address 00:ff:1d:e9:b9:a3 belongs to both vnet0 and vnet4. vnet0
> > is the bridge, which has ip .1 on the host, and which is also the
> > interface that wireshark is being run on. The other two addresses are
> > the mac addresses of the (virtual) ethernet interfaces inside the two
> > kvm's, with ip's .129 and .130 respectively. So .130 is sending to the
> > expected mac address for .129, but responses from .130 are getting the
> > mac address of vnet0/vnet4.
> >
> > I'm running wireshark on the host on vnet0. Just out of curiosity, I
> > ran it on the host on vnet1 instead, and this time saw the first DLM
> > connection made from ip .1 and piglet2's mac address. Erp. Ok, I'll
> > experiment some more and look at the /sbin/ip output.
>
> Bah, yes, I clearly got the network configuration completely screwed up
> at some point--it must be trying to do some kind of NAT, though that
> clearly makes no sense. I'll get this untangled and then I think it
> should be OK....
Problem found. So the network configuration that libvirt sets up has 4
interfaces (one for each of the 4 kvm guests) all bridged together on
the host, with NAT setup to give the guests access to the outside world.
That looks like this:
root at pig:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 192.168.122.0/24 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
OK, fine, except that packets exchanged between the hosts on the bridge
also seem to be going through that POSTROUTING chain, so tcp
connections between the guests work--sort of--but they're all getting
NAT'd so they appear to come from 192.168.122.1, and the dlm complains
about a "connection from a non cluster host".
So my gfs2 mount finally succeeds after:
root at pig:~# iptables -t nat -I POSTROUTING -s 192.168.122.0/24 -d 192.168.122.0/24 -j ACCEPT
root at pig:~# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 192.168.122.0/24 192.168.122.0/24
MASQUERADE all -- 192.168.122.0/24 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I don't know if that's the right fix. In any case, the original
behavior certainly looks to me like a bug in libvirt.
Thanks for your patience! I should have caught that much sooner....
--b.
From ajeet.singh.raina at logica.com Mon Jul 14 04:31:45 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 10:01:45 +0530
Subject: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared Storage..
In-Reply-To: <00a801c8e47d$72d6e4f0$5884aed0$@gr>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1795D@in-ex004.groupinfra.com>
I have gone through these, but none of the docs says anything about
installing GFS on an iSCSI-based storage setup.
I have no shared storage, but rather an iSCSI kind of configuration.
I request you to provide some hint/doc that would be helpful for a
quick setup for testing purposes.
Ajeet
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Theophanis
Kontogiannis
Sent: Sunday, July 13, 2008 5:45 AM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hello,
Yes, for instant access to the files from all nodes you need a
cluster-aware file system like GFS (or GFS2, which is still experimental).
You can try the following links:
http://www.redhat.com/docs/manuals/csgfs/ (under GFS section)
http://gfs.wikidev.net/Main_Page
BR
Theophanis Kontogiannis
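As a rough sketch of the GFS side once the iSCSI device is visible on both
nodes (the cluster name, file system name and device below are placeholders;
on RHEL 4 the mkfs tool comes from the GFS package, and the journal count
must be at least the number of nodes):
# on one node only, with cman/fenced already running
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 2 /dev/sda1
# on every node
mount -t gfs /dev/sda1 /newshare
The -t argument must match the cluster name from cluster.conf, otherwise the
mount is refused.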
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 2:07 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Brilliant... it worked.
I think GFS will enable us to see the files instantly on both
cluster nodes.
Any doc related to "Setting Up GFS"?
Pls Help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 4:08 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
You are logging in to the same iscsi server (IP address) using the iscsi
commands, so both are connected to the same shared storage.
Just mount it from one node and create some files on it, unmount it from
that node, then mount it from the other node and see
whether the files created from the first node are visible or not.
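Spelled out as commands, that test is roughly (device and mount point as
used elsewhere in this thread; note that with plain ext3 only one node
should have it mounted at any given time):
# on node 1
mount /dev/sda1 /newshare
touch /newshare/testfile
umount /newshare
# on node 2
mount /dev/sda1 /newshare
ls -l /newshare    # testfile should show up here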
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 4:01 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I rebooted all the machines and this time it seems to work.
But I am stuck again with something.
I can see:
# df -h
/dev/sda1 2.8G 37M 2.6G 2% /newshare
on both machines.
But whenever I create a file on one initiator, it does not appear on the
other. Why so?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:55 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
When you mount the file system, check with the df command whether it is
really mounted or not.
Why don't you just stop the iscsi service on both nodes and restart it
again to get a clean state?
Please also search in other forums, where the same information may already
be available (google whatever error messages you are getting).
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 3:44 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Hi, I have successfully set up the iSCSI target and initiator. I was able to
create a partition and file system on the previously raw partition.
I mounted the partition as:
#mount /dev/sda1 /newshare (the mount point mentioned in the cluster tool >
Resources > Filesystem)
and labelled it with e2label /dev/sda1 DATA.
But when I tried to restart iscsi on the next cluster node it showed me:
Removing iscsi driver: ERROR: Module iscsi_sfnet is in use
What is this error all about?
Now it is showing on both nodes?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 3:21 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
To discover this volume from both nodes - hopefully you are aware of these
iscsi commands.
Just giving examples:
1) First discover if these volumes are visible
1) # iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is IP address of iscsi )
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware
You can see it is showing prov,
prov-goldilocks1,p3vmware,p2vmware volumes [whichever is created]
2) Login to iscsi:
iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov
--portal 10.1.40.222 --login
3) do cat /proc/partitions
It should show you a /dev/sd* device.
4) mount that /dev/sd* on any of the cluster nodes (it should allow you to
mount it from both nodes).
Just read some iscsi manuals and do this (without the GUI you can do
that... add a new resource, basically a clustering resource, which
automatically mounts your shared device when the cluster manager is
started).
So better configure it using the iscsi commands and see whether you can
mount it from both nodes (then you can add a resource for it).
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 12:33 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Yes, I have now created the /newshare directory on both iSCSI initiator
machines (the cluster nodes).
I made the following entry through system-config-cluster:
Resource >> Add New Resource >> Filesystem
Name : Sharedstorage
Mount Point : /newshare
Device : /dev/sda6
Option :
Filesystem type : ext3
I saved the file and sent it to the other cluster nodes.
Now what next?
How will I know whether the shared storage is seen by both cluster
nodes?
Earlier I had a script called duoscript on both cluster nodes. What I
had tested:
I ran the script on both cluster nodes; when I stopped a few processes on
one node, the other suddenly took over.
Now where should I put the script on the shared storage (target)?
Pls Help
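For reference, the same entries expressed directly in cluster.conf would
look roughly like the fragment below; the fs attributes are exactly what was
typed into the GUI above, while the service name and the script path are
only assumptions:
<rm>
  <resources>
    <fs name="Sharedstorage" device="/dev/sda6" mountpoint="/newshare"
        fstype="ext3" force_unmount="1"/>
    <script name="duoscript" file="/newshare/duoscript.sh"/>
  </resources>
  <service autostart="1" name="sharedsvc">
    <fs ref="Sharedstorage"/>
    <script ref="duoscript"/>
  </service>
</rm>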
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mujtaba, Sayed
Mohammed
Sent: Friday, July 11, 2008 12:26 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Re: When I click on Resource >> File System in the cluster tool, it asks
for mount point, device, option, name, filesystem id and filesystem type.
What entries do I need to make?
Create one directory as the mount point, select whichever file system type
you want from the list, and you can keep the default file system ID.
The GUI will do the rest.
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Friday, July 11, 2008 11:45 AM
To: linux-cluster at redhat.com
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Anyway, I am successful in setting up the iSCSI initiator and target.
What I did was create a raw (unformatted) partition on the target machine
and restart both machines.
I put:
Lun 0 Path=/dev/sda6
and that did the job for me.
Now I can easily see:
[root at BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
The "Virtual DISk" Entry confirms that.
Now I am making entry in
#system-config-cluster and Want to know what exact entry I need to make
here:
When I click on Resource >> File System on Cluster Tool...It asked for
Mount point, Device, Option,Name,filesystem id, filesystem type..What
Entry I need to make ?
My machine address is 10.14.236.134.
Path where Unformatted Partition made is /dev/sda6
As for Now, I have only unformatted partition?Do I need to format it?
Pls Help
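Yes, for an ext3 fs resource the device has to be formatted first. A minimal
sketch, assuming the exported LUN shows up as /dev/sda on the initiators;
run it from one node only, never from both at once:
# on one initiator only
fdisk /dev/sda            # create a partition, e.g. /dev/sda1
mkfs.ext3 /dev/sda1
e2label /dev/sda1 DATA    # optional label, as used elsewhere in this thread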
From: Singh Raina, Ajeet
Sent: Thursday, July 10, 2008 4:33 PM
To: 'linux clustering'
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
[root at BL02DL385 ~]# iscsi-ls
************************************************************************
*******
SFNet iSCSI Driver Version ....4:0.1.11-6(03-Aug-2007)
************************************************************************
*******
TARGET NAME : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS :
HOST ID : 0
BUS ID : 0
TARGET ID : 0
TARGET ADDRESS : 10.14.236.134:3260,1
SESSION STATUS : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID : ISID 00023d000001 TSIH 100
************************************************************************
*******
[root at BL02DL385 ~]# chkconfig iscsi on
[root at BL02DL385 ~]#
I guess it worked. Finally, the iSCSI setup is done.
What is the next step?
Pls help
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:28 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I followed what the doc said and it went this way:
[root at BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature:
NOKEY, key ID 9b3c94f4
Preparing... ###########################################
[100%]
1:iscsi-initiator-utils ###########################################
[100%]
[root at BL02DL385 ~]# vi /etc/iscsi.conf
DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
#
DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l
#
[root at BL02DL385 ~]# service iscsi start
Checking iscsi config: [ OK ]
Loading iscsi driver: [ OK ]
Starting iscsid: [ OK ]
[root at BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root at BL02DL385 ~]# vi /proc/scsi/scsi
It is displaying this:
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IET Model: VIRTUAL-DISK Rev: 0
Type: Direct-Access ANSI SCSI revision: 04
~
~
Is it working fine?
I will run the same command sequence on the other cluster node.
Is it fine up to this point?
What next?
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 4:13 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
Great!!!
I ran depmod and now it works fine.
Thanks for the link anyway.
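For anyone hitting the same "Module iscsi_trgt not found" error after
installing the iscsitarget-kernel RPM, the sequence that resolved it here is
roughly:
depmod -a                    # rebuild the module dependency map
modprobe iscsi_trgt          # load the freshly installed target module
service iscsi-target restart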
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of P, Prakash
Sent: Thursday, July 10, 2008 3:39 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
This is related to IET. Go through their mailing list to find the
solution.
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
________________________________
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Thursday, July 10, 2008 3:30 PM
To: linux clustering
Subject: RE: [Linux-cluster] RE: iSCSI Setup as Alternative to Shared
Storage..
I am Facing this Issue:
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
Logs: /var/log/messages
Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed
Any idea?
I just did the following steps:
[root at vjs ~]# mkdir cluster_share
[root at vjs ~]# cd cluster_share/
[root at vjs cluster_share]# touch shared
[root at vjs cluster_share]# cd
[root at vjs ~]# mkdir /usr/src/iscsitarget
[root at vjs ~]# cd /usr/src/
debug/ iscsitarget/ kernels/ redhat/
[root at vjs ~]# cd /usr/src/iscsitarget/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/
noarch/ x86_64/
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-
iscsitarget-0.4.12-6.x86_64.rpm
iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
iscsitarget-debuginfo-0.4.12-6.x86_64.rpm
[root at vjs iscsitarget]# rpm -Uvh
/usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm
/usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_
64.rpm
Preparing... ###########################################
[100%]
1:iscsitarget-kernel ###########################################
[ 50%]
2:iscsitarget ###########################################
[100%]
[root at vjs iscsitarget]# chkconfig --add iscsi-target
[root at vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root at vjs iscsitarget]# vi /etc/ietd.conf
Target iqn.2008-07.com.logica.vjs:storage.lun1
IncomingUser
OutgoingUser
Lun 0 Path=/root/cluster_share,Type=fileio
Alias iDISK0
I had created the cluster_share folder earlier. (Could the failure be
because of that folder?)
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# hostname
vjs
[root at vjs iscsitarget]# vi /etc/hosts
[root at vjs iscsitarget]# ping
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or
address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
[root at vjs iscsitarget]# vjs
bash: vjs: command not found
[root at vjs iscsitarget]# ping vjs
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64
time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root at vjs iscsitarget]# ping vjs.logica.com
PING vjs.logica.com (10.14.236.134) 56(84) bytes of data.
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64
time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64
time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root at vjs iscsitarget]# vi /etc/ietd.conf
[root at vjs iscsitarget]# service iscsi-target restart
Stoping iSCSI target service: [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd
: Connection refused
[FAILED]
[root at vjs iscsitarget]#
[root at vjs iscsitarget]#
From fdinitto at redhat.com Mon Jul 14 05:38:56 2008
From: fdinitto at redhat.com (Fabio M. Di Nitto)
Date: Mon, 14 Jul 2008 07:38:56 +0200 (CEST)
Subject: [Linux-cluster] Cluster 2.03.05 released
Message-ID:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
The cluster team and its vibrant community are proud to announce the 6th
release from the STABLE2 branch: 2.03.05.
The STABLE2 branch collects, on a daily basis, all bug fixes and the bare
minimum of changes required to run the cluster on top of the most recent
Linux kernel (2.6.25) and a rock solid openais (0.80.3 or higher).
The new source tarball can be downloaded here:
ftp://sources.redhat.com/pub/cluster/releases/cluster-2.03.05.tar.gz
In order to use GFS1, the Linux kernel requires a minimal patch:
ftp://sources.redhat.com/pub/cluster/releases/lockproto-exports.patch
To report bugs or issues:
https://bugzilla.redhat.com/
Would you like to meet the cluster team or members of its community?
Join us on IRC (irc.freenode.net #linux-cluster) and share your
experience with other system administrators or power users.
Happy clustering,
Fabio
Under the hood (from 2.03.04):
Benjamin Marzinski (3):
gnbd-kernel: Fix receiver race
[gnbd-kernel] bz 449812: disallow sending requests after a send has failed.
[gnbd-kernel] bz 442606: Switch gnbd to use deadline scheduler by default.
Bob Peterson (12):
Added an optional block-size to mkfs.gfs2
Fix build warnings in gfs2-utils.
Fix another compiler warning for 32-bit arch.
Fix build warnings from libgfs
Fix gfs_debug build warning
Ignoring gets return value in gfs_mkfs
Fix gfs_tool build warnings
Fix gfs_fsck build warnings
Fix 32-bit warning in super.c.
452004: gfs: BUG: unable to handle kernel paging request.
savemeta was not saving gfs1 journals properly.
gfs2_fsck fails: Unable to read in jindex inode.
Christine Caulfield (2):
[CMAN] Fix some compiler warnings on 64 bit systems
[CMAN] Only do timestamp check for older nodes.
Fabio M. Di Nitto (18):
[QDISK] Add better support for Xen virtual block devices
[CCS] Fix build warnings on sparc
[QDISK] Fix debug type
[QDISK] get_config_data cleanup
[QDISK] Remove duplicate debugging configuration
[MISC] Fix build errors with Fedora default build options
[MISC] Fix previous cherry pick build failure in stable branch
[QDISK] Major clean up
[GFS2] hexedit does not need syslog
[CCS] Remove duplicate header
[BUILD] Allow configuration of docdir
[BUILD] Fix docdir default path
[MISC] Documentation cleanup
[BUILD] Fix install of telnet_ssl
[BUILD] Fix telnet_ssl build
[BUILD] Add make oldconfig target
[BUILD] Add fence_lpar fencing agent to the build system
[BUILD] Clean extra kernel modules files
James Parsons (1):
Fix for 251358
Lon Hohberger (5):
Fix #362351 - make fence_xvmd work in no-cluster mode
Ancillary NOCLUSTER mode fixes for fence_xvmd
Ancillary NOCLUSTER mode fixes for fence_xvmd
[rgmanager] Make rgmanager check pbond links correctly
[rgmanager] Fix erroneous broadcast matching in ip.sh
Marek 'marx' Grac (2):
[FENCE] Bug #448822: fence_ilo doesn't work with iLO
[FENCE]: Fix #237266: New fence agent for HMC/LPAR
.gitignore | 1 +
COPYING.applications | 339 ----------------------
COPYING.libraries | 510 ---------------------------------
COPYRIGHT | 242 ----------------
Makefile | 11 +-
README.licence | 33 ---
ccs/daemon/cnx_mgr.c | 8 +
ccs/daemon/misc.c | 1 -
cman/daemon/ais.c | 4 +-
cman/daemon/commands.c | 6 +-
cman/daemon/daemon.c | 4 +-
cman/qdisk/crc32.c | 8 -
cman/qdisk/daemon_init.c | 16 +-
cman/qdisk/disk.h | 1 -
cman/qdisk/disk_util.c | 69 +-----
cman/qdisk/main.c | 88 ++----
cman/qdisk/proc.c | 8 +-
cman/qdisk/scandisk.c | 32 ++-
cman/qdisk/score.c | 56 +----
cman/qdisk/score.h | 5 -
configure | 15 +
doc/COPYING.applications | 339 ++++++++++++++++++++++
doc/COPYING.libraries | 510 +++++++++++++++++++++++++++++++++
doc/COPYRIGHT | 242 ++++++++++++++++
doc/Makefile | 17 ++
doc/README.licence | 33 +++
fence/agents/egenera/fence_egenera.pl | 22 ++-
fence/agents/ilo/fence_ilo.py | 99 ++++---
fence/agents/lib/Makefile | 2 +-
fence/agents/lib/fencing.py.py | 18 ++-
fence/agents/lib/telnet_ssl.py | 72 +++++
fence/agents/lpar/Makefile | 18 ++
fence/agents/lpar/fence_lpar.py | 97 +++++++
fence/agents/xvm/fence_xvm.c | 4 +-
fence/agents/xvm/fence_xvmd.c | 43 +++-
fence/agents/xvm/options.c | 1 -
fence/agents/xvm/xml.c | 4 +-
fence/man/fence_xvmd.8 | 7 +
gfs-kernel/src/gfs/bits.c | 2 +-
gfs/gfs_debug/readfile.c | 4 +-
gfs/gfs_fsck/fs_bits.c | 13 +-
gfs/gfs_fsck/fs_dir.c | 4 +-
gfs/gfs_fsck/fs_inode.c | 2 +-
gfs/gfs_fsck/log.c | 8 +-
gfs/gfs_fsck/main.c | 18 +-
gfs/gfs_fsck/pass2.c | 4 +-
gfs/gfs_fsck/pass5.c | 4 +-
gfs/gfs_fsck/rgrp.c | 4 +-
gfs/gfs_fsck/super.c | 19 +-
gfs/gfs_fsck/util.c | 6 +-
gfs/gfs_mkfs/main.c | 4 +-
gfs/gfs_tool/counters.c | 2 +-
gfs/gfs_tool/main.c | 2 +-
gfs/gfs_tool/misc.c | 6 +-
gfs/gfs_tool/sb.c | 11 +-
gfs/libgfs/file.c | 2 +-
gfs/libgfs/fs_bits.c | 6 +-
gfs/libgfs/fs_dir.c | 6 +-
gfs/libgfs/fs_inode.c | 2 +-
gfs/libgfs/log.c | 8 +-
gfs/libgfs/rgrp.c | 8 +-
gfs/libgfs/util.c | 6 +-
gfs2/edit/hexedit.c | 6 +-
gfs2/edit/savemeta.c | 13 +
gfs2/fsck/lost_n_found.c | 26 ++-
gfs2/libgfs2/super.c | 1 +
gfs2/man/mkfs.gfs2.8 | 11 +-
gfs2/mkfs/main_mkfs.c | 29 ++-
gfs2/quota/main.c | 19 +-
gfs2/tool/df.c | 9 +-
gnbd-kernel/src/gnbd.c | 62 ++++-
gnbd-kernel/src/gnbd.h | 3 +
make/clean.mk | 3 +-
make/defines.mk.input | 1 +
rgmanager/src/clulib/cman.c | 6 +-
rgmanager/src/clulib/daemon_init.c | 14 +-
rgmanager/src/clulib/msg_cluster.c | 26 ++-
rgmanager/src/clulib/msgtest.c | 3 +-
rgmanager/src/daemons/clurmtabd_lib.c | 2 +-
rgmanager/src/daemons/main.c | 3 +-
rgmanager/src/resources/ip.sh | 13 +-
81 files changed, 1872 insertions(+), 1514 deletions(-)
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
iQIVAwUBSHrmdggUGcMLQ3qJAQLcww/9Esm6ygIuGGZ4ycMcKtcob6qmI3dcY1K3
YfaKm5g0iDF9bNQVwiZPyMLiFUdre9wxhx7Eh7rWqI/a728osxTInXktiOlo6kcR
NEkA3AyX2A2MbmJf59aTTSDzI0EJ+I2IkNv54pyXwoZVmHNBnR2a6/J/afYk16K5
hq5/SNxBSf9bGEjfo+1D7ntOwQZ8eCcIgw8FnY3kkdcM4ZkkcKKXQO8X8q4tlgXr
Euq4GUh8WjkkTKtPxxLlyMfqc9Jo/G2UwESgT0XGyEHm45Ao7ye4opVmLu8516rw
lOJje35+MkGfuCQROGZn9C4ZxGNVQf3CaiXzwYLBQKbyPiR31BaKEVOmwPiX84f5
TgOrdJWPxPHudaCUpgkEdORKl5iM8XHR+wokBegNmttF38ouA7R9ndtgv4lMbqbI
vh9GKnVfmeBjtU2TAKlvHaLsrM+EBOkG6O8Jp010cb77hVxpf3TxMi8hrN1QGdlo
1ImDzRkTvWNTaGp++MGc0mm6VGaZPsc5VCvI0KERphF8CduP4y5Qtq2fp4wWZMbR
exMqvraz1odGTRNjfit5+fEV4pV7FOYwwAjlGt7GU86qVaZHsLrJlXQ1R47lE0k1
Uuvia7lL83Prr/e7zF+AOT/Y3UVvMht+c5JP1lTV8AjIsX52BEvqyaVmQNLNA6NI
Ps1i6yrqEn8=
=ba+3
-----END PGP SIGNATURE-----
From ajeet.singh.raina at logica.com Mon Jul 14 07:07:58 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 12:37:58 +0530
Subject: [Linux-cluster] GFS Installation on iSCSI..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17960@in-ex004.groupinfra.com>
Hello Guys,
My Machine information is:
[root at BL02DL385 ~]# uname -arn
Linux BL02DL385 2.6.9-22.ELsmp #1 SMP Mon Sep 19 18:00:54 EDT 2005
x86_64 x86_64 x86_64 GNU/Linux
I have downloaded the GFS Package :
[root at BL02DL385 ~]# rpm -qa GFS
GFS-6.1.15-3
But I am getting other packages matching my architecture; what I was
searching for is a src package which I can rebuild myself.
But I wonder what the steps for that are?
Pls Help
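Rebuilding from a source RPM is normally a single command; a sketch,
assuming a GFS src.rpm matching the installed version has been downloaded
(the exact file name is a guess), with the resulting binary RPMs landing
under /usr/src/redhat/RPMS/<arch>/ on RHEL 4:
rpmbuild --rebuild GFS-6.1.15-3.src.rpm
rpm -Uvh /usr/src/redhat/RPMS/x86_64/GFS-*.rpm
Note that the matching GFS-kernel (kernel module) package also has to be
built against the running kernel.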
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
From ajeet.singh.raina at logica.com Mon Jul 14 09:12:34 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 14:42:34 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com>
I have an old Cluster RPM installed on my machine. Now I have got the
cluster-2.03.04 package.
How can I install it?
When I tried untarring the package and installing it, it threw the
following error:
[root at loy cluster-2.03.04]# ./configure
Configuring Makefiles for your system...
Checking tree: nothing to do
Checking kernel:
Unable to find (/usr/src/linux/Makefile)!
Make sure that:
- the above path is correct
- your kernel is properly configured and prepared.
- kernel_build and kernel_src options to configure are set properly.
[root at loy cluster-2.03.04]# cd
I am also not finding any doc for that.
Pls Help
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
From gsrlinux at gmail.com Mon Jul 14 09:27:25 2008
From: gsrlinux at gmail.com (GS R)
Date: Mon, 14 Jul 2008 14:57:25 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17965@in-ex004.groupinfra.com>
Message-ID: <487B1BFD.9030707@gmail.com>
Singh Raina, Ajeet wrote:
>
> I have an old Cluster RPM installed on my machine.Now I have got
> cluster-2.03.04 Package.
>
> How Can I install it?
>
> When I tried untarring the package and installing,it threw the
> following error:
>
> [root at loy cluster-2.03.04]# ./configure
>
> Configuring Makefiles for your system...
>
> Checking tree: nothing to do
>
> Checking kernel:
>
> Unable to find (/usr/src/linux/Makefile)!
>
Hi Ajeet,
Check whether you have the kernel development packages installed.
If yes, then do:
[root at gsr1 ~]# cd /usr/src/
[root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux
and then try to ./configure again. Let us know if that helps.
Thanks
Gowrishankar Rajaiyan
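If the symlink route is not wanted, the configure script also accepts the
kernel tree location directly, as hinted by the kernel_src option it
mentions in its error message; a sketch (use whatever directory is actually
present under /usr/src/kernels, and check ./configure --help for the exact
option spelling):
cd cluster-2.03.04
./configure --kernel_src=/usr/src/kernels/2.6.18-92.el5-x86_64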
From ajeet.singh.raina at logica.com Mon Jul 14 09:35:07 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 15:05:07 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <487B1BFD.9030707@gmail.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com>
I did the steps you suggested, but it is throwing an error:
[root at BL01DL385 cluster-2.03.04]# ./configure
Configuring Makefiles for your system...
Checking tree: nothing to do
Checking kernel:
Current kernel version: 2.6.9
Minimum kernel version: 2.6.25
FAILED!
Do I have to upgrade the kernel version?
[root at BL01DL385 cluster-2.03.04]#
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
Sent: Monday, July 14, 2008 2:57 PM
To: linux clustering
Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Singh Raina, Ajeet wrote:
>
> I have an old Cluster RPM installed on my machine.Now I have got
> cluster-2.03.04 Package.
>
> How Can I install it?
>
> When I tried untarring the package and installing,it threw the
> following error:
>
> [root at loy cluster-2.03.04]# ./configure
>
> Configuring Makefiles for your system...
>
> Checking tree: nothing to do
>
> Checking kernel:
>
> Unable to find (/usr/src/linux/Makefile)!
>
Hi Ajeet,
Check if you have the kernel development packages installed?
If yes then do a
[root at gsr1 ~]# cd /usr/src/
[root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux
and then try to ./configure again. Let us know if that helps.
Thanks
Gowrishankar Rajaiyan
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
From ajeet.singh.raina at logica.com Mon Jul 14 09:38:05 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 15:08:05 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17967@in-ex004.groupinfra.com>
Which RHEL version do I need to install on my system?
Pls help.
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina,
Ajeet
Sent: Monday, July 14, 2008 3:05 PM
To: linux clustering
Subject: RE: [Linux-cluster] How to Install Cluster-2.03.<> Package..
I did the steps said by you but it is throwing error:
[root at BL01DL385 cluster-2.03.04]# ./configure
Configuring Makefiles for your system...
Checking tree: nothing to do
Checking kernel:
Current kernel version: 2.6.9
Minimum kernel version: 2.6.25
FAILED!
Should I have to upgrade the kernel Version.
[root at BL01DL385 cluster-2.03.04]#
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
Sent: Monday, July 14, 2008 2:57 PM
To: linux clustering
Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Singh Raina, Ajeet wrote:
>
> I have an old Cluster RPM installed on my machine.Now I have got
> cluster-2.03.04 Package.
>
> How Can I install it?
>
> When I tried untarring the package and installing,it threw the
> following error:
>
> [root at loy cluster-2.03.04]# ./configure
>
> Configuring Makefiles for your system...
>
> Checking tree: nothing to do
>
> Checking kernel:
>
> Unable to find (/usr/src/linux/Makefile)!
>
Hi Ajeet,
Check if you have the kernel development packages installed?
If yes then do a
[root at gsr1 ~]# cd /usr/src/
[root at gsr1 src]# ln -s kernels/2.6.18-92.el5-x86_64 linux
and then try to ./configure again. Let us know if that helps.
Thanks
Gowrishankar Rajaiyan
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
This e-mail and any attachment is for authorised use by the intended
recipient(s) only. It may contain proprietary material, confidential
information and/or be subject to legal privilege. It should not be
copied, disclosed to, retained or used by, any other party. If you are
not an intended recipient then please promptly delete this e-mail and
any attachment and all copies and inform the sender. Thank you.
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
From gsrlinux at gmail.com Mon Jul 14 10:02:54 2008
From: gsrlinux at gmail.com (GS R)
Date: Mon, 14 Jul 2008 15:32:54 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17966@in-ex004.groupinfra.com>
Message-ID: <487B244E.7020003@gmail.com>
Singh Raina, Ajeet wrote:
> I did the steps said by you but it is throwing error:
>
>
> [root at BL01DL385 cluster-2.03.04]# ./configure
>
> Configuring Makefiles for your system...
>
> Checking tree: nothing to do
>
> Checking kernel:
> Current kernel version: 2.6.9
> Minimum kernel version: 2.6.25
> FAILED!
>
> Should I have to upgrade the kernel Version.
>
Yes. You will have to upgrade the kernel.
Check http://www.kernel.org/ for the latest stable kernel.
Thanks
Gowrishankar Rajaiyan
From ajeet.singh.raina at logica.com Mon Jul 14 10:05:39 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 15:35:39 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <487B244E.7020003@gmail.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com>
I have already set up a Cluster-0.9 version. Will the kernel upgrade
flush this out?
Can you let me know the quick steps to do that?
I checked the list of kernel versions and I think RHEL 4 Update 3
will be the right choice?
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
Sent: Monday, July 14, 2008 3:33 PM
To: linux clustering
Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Singh Raina, Ajeet wrote:
> I did the steps said by you but it is throwing error:
>
>
> [root at BL01DL385 cluster-2.03.04]# ./configure
>
> Configuring Makefiles for your system...
>
> Checking tree: nothing to do
>
> Checking kernel:
> Current kernel version: 2.6.9
> Minimum kernel version: 2.6.25
> FAILED!
>
> Should I have to upgrade the kernel Version.
>
Yes. You will have to upgrade the kernel.
Check http://www.kernel.org/ for the latest stable kernel.
Thanks
Gowrishankar Rajaiyan
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
From gsrlinux at gmail.com Mon Jul 14 10:16:18 2008
From: gsrlinux at gmail.com (GS R)
Date: Mon, 14 Jul 2008 15:46:18 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B17969@in-ex004.groupinfra.com>
Message-ID: <487B2772.4090009@gmail.com>
Singh Raina, Ajeet wrote:
> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation
> flush this out?
>
Upgrading the kernel should not flush out anything, simply because your
previous kernel stays intact and you can boot into it.
But make sure you do a -Uvh and not a -ivh.
> Can you let me know the quick steps to do that?
>
Quick steps for what? It is not clear what steps you are expecting here.
> I checked with the List of kernel version and I think RHEL 4 Update 3
> will be the right choice?
>
>
I am not sure about the RHEL version here. That's for you to confirm. :-)
Thanks
Gowrishankar Rajaiyan
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
> Sent: Monday, July 14, 2008 3:33 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>
> Singh Raina, Ajeet wrote:
>
>> I did the steps said by you but it is throwing error:
>>
>>
>> [root at BL01DL385 cluster-2.03.04]# ./configure
>>
>> Configuring Makefiles for your system...
>>
>> Checking tree: nothing to do
>>
>> Checking kernel:
>> Current kernel version: 2.6.9
>> Minimum kernel version: 2.6.25
>> FAILED!
>>
>> Should I have to upgrade the kernel Version.
>>
>>
> Yes. You will have to upgrade the kernel.
> Check http://www.kernel.org/ for the latest stable kernel.
>
> Thanks
> Gowrishankar Rajaiyan
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
> This e-mail and any attachment is for authorised use by the intended recipient(s) only. It may contain proprietary material, confidential information and/or be subject to legal privilege. It should not be copied, disclosed to, retained or used by, any other party. If you are not an intended recipient then please promptly delete this e-mail and any attachment and all copies and inform the sender. Thank you.
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
From ajeet.singh.raina at logica.com Mon Jul 14 10:20:28 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 15:50:28 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <487B2772.4090009@gmail.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com>
I am a newbie to kernel upgrades. I downloaded the patch but don't know
how to proceed further. The patch is in .bz2 format, and all I did was
run bunzip2, which unpacked it for me.
Can you help me with the further steps?
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
Sent: Monday, July 14, 2008 3:46 PM
To: linux clustering
Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Singh Raina, Ajeet wrote:
> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation
> flush this out?
>
Upgrading the kernel should not flush out anything simply because your
previous kernel is intact and you can boot into it.
But make sure you do a /-Uvh/ and not a/ -ivh/.
> Can you let me know the quick steps to do that?
>
quick steps of what? Not clear what steps you are expecting here.
> I checked with the List of kernel version and I think RHEL 4 Update 3
> will be the right choice?
>
>
I am not sure about the RHEL version here. Thats for you to confirm it.
:-)
Thanks
Gowrishankar Rajaiyan
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
> Sent: Monday, July 14, 2008 3:33 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>
> Singh Raina, Ajeet wrote:
>
>> I did the steps said by you but it is throwing error:
>>
>>
>> [root at BL01DL385 cluster-2.03.04]# ./configure
>>
>> Configuring Makefiles for your system...
>>
>> Checking tree: nothing to do
>>
>> Checking kernel:
>> Current kernel version: 2.6.9
>> Minimum kernel version: 2.6.25
>> FAILED!
>>
>> Should I have to upgrade the kernel Version.
>>
>>
> Yes. You will have to upgrade the kernel.
> Check http://www.kernel.org/ for the latest stable kernel.
>
> Thanks
> Gowrishankar Rajaiyan
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
From gsrlinux at gmail.com Mon Jul 14 10:38:33 2008
From: gsrlinux at gmail.com (GS R)
Date: Mon, 14 Jul 2008 16:08:33 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B1796A@in-ex004.groupinfra.com>
Message-ID: <487B2CA9.3070601@gmail.com>
Singh Raina, Ajeet wrote:
> I am newbie to Kernel Upgradation.I downloaded the patch but donno know
> how to proceed further.The patch is in .bzip2 format and all I did is
> run bunzip2 and that did the untar for me.
> Can you help me with further step?
>
I hope you are doing this on a test machine.
Do not try patching your kernel if you are not sure what you are doing;
that could be harmful.
Try downloading the complete kernel RPM and upgrading it instead.
Check for kernel RPMS:
http://rpmfind.net/linux/rpm2html/search.php?query=kernel&submit=Search+...&system=&arch=
http://rpmfind.net/linux/rpm2html/search.php?query=kernel-devel&submit=Search+...&system=&arch=
ftp://rpmfind.net/linux/fedora/releases/9/Everything/i386/os/Packages/kernel-2.6.25-14.fc9.i686.rpm
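A rough sketch of that flow (the kernel-devel file name is an assumption based on the search links above; pick the packages that actually match your distribution and architecture):

# fetch the kernel RPM from the last link above, plus the matching
# kernel-devel RPM from the kernel-devel search link, then:
rpm -Uvh kernel-2.6.25-14.fc9.i686.rpm kernel-devel-2.6.25-14.fc9.i686.rpm
# kernel-devel provides the build tree under /lib/modules/<version>/build,
# which out-of-tree module builds (like the cluster kernel modules) need
rpm -q kernel kernel-devel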
Thanks
Gowrishankar Rajaiyan
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
> Sent: Monday, July 14, 2008 3:46 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>
> Singh Raina, Ajeet wrote:
>
>> I have already setup Cluster-0.9 version Setup.Will Kernel upgradation
>> flush this out?
>>
>>
> Upgrading the kernel should not flush out anything simply because your
> previous kernel is intact and you can boot into it.
> But make sure you do a /-Uvh/ and not a/ -ivh/.
>
>> Can you let me know the quick steps to do that?
>>
>>
> quick steps of what? Not clear what steps you are expecting here.
>
>> I checked with the List of kernel version and I think RHEL 4 Update 3
>> will be the right choice?
>>
>>
>>
> I am not sure about the RHEL version here. Thats for you to confirm it.
> :-)
>
> Thanks
> Gowrishankar Rajaiyan
>
>
>
>> -----Original Message-----
>> From: linux-cluster-bounces at redhat.com
>> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
>> Sent: Monday, July 14, 2008 3:33 PM
>> To: linux clustering
>> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>>
>> Singh Raina, Ajeet wrote:
>>
>>
>>> I did the steps said by you but it is throwing error:
>>>
>>>
>>> [root at BL01DL385 cluster-2.03.04]# ./configure
>>>
>>> Configuring Makefiles for your system...
>>>
>>> Checking tree: nothing to do
>>>
>>> Checking kernel:
>>> Current kernel version: 2.6.9
>>> Minimum kernel version: 2.6.25
>>> FAILED!
>>>
>>> Should I have to upgrade the kernel Version.
>>>
>>>
>>>
>> Yes. You will have to upgrade the kernel.
>> Check http://www.kernel.org/ for the latest stable kernel.
>>
>> Thanks
>> Gowrishankar Rajaiyan
>>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>>
>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>>
>>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
From ajeet.singh.raina at logica.com Mon Jul 14 11:46:05 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 17:16:05 +0530
Subject: [Linux-cluster] How to Install Cluster-2.03.<> Package..
In-Reply-To: <487B2CA9.3070601@gmail.com>
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796B@in-ex004.groupinfra.com>
I downloaded an old cluster version, 2.00.00<>,
and tried to install it.
[root at BL02DL385 cluster-2.00.00]# ./configure
configure gnbd-kernel
Configuring Makefiles for your system...
Can't open /usr/src/linux-2.6/include/linux/version.h at ./configure
line 95.
configure ccs
Configuring Makefiles for your system...
Completed Makefile configuration
configure cman
Configuring Makefiles for your system...
Completed Makefile configuration
configure group
Configuring Makefiles for your system...
Completed Makefile configuration
configure dlm
Configuring Makefiles for your system...
Completed Makefile configuration
configure fence
Configuring Makefiles for your system...
Completed Makefile configuration
configure gfs-kernel
Configuring Makefiles for your system...
Can't open /usr/src/linux-2.6/include/linux/version.h at ./configure
line 107.
configure gfs
Configuring Makefiles for your system...
Completed Makefile configuration
configure gfs2
Configuring Makefiles for your system...
Completed Makefile configuration
configure gnbd
Configuring Makefiles for your system...
Completed Makefile configuration
configure rgmanager
Configuring Makefiles for your system...
Completed Makefile configuration
[root at BL02DL385 cluster-2.00.00]# ls
ccs cman dlm fence gfs2 gnbd group
rgmanager
clumon configure doc gfs gfs-kernel gnbd-kernel Makefile
scripts
[root at BL02DL385 cluster-2.00.00]# make
make -C gnbd-kernel all
make[1]: Entering directory `/root/cluster-2.00.00/gnbd-kernel'
make -C src all
make[2]: Entering directory `/root/cluster-2.00.00/gnbd-kernel/src'
make -C M=/root/cluster-2.00.00/gnbd-kernel/src modules
USING_KBUILD=yes
make: *** M=/root/cluster-2.00.00/gnbd-kernel/src: No such file or
directory. Stop.
make: Entering an unknown directory
make: Leaving an unknown directory
make[2]: *** [all] Error 2
make[2]: Leaving directory `/root/cluster-2.00.00/gnbd-kernel/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/cluster-2.00.00/gnbd-kernel'
make: *** [all] Error 2
Any idea why I am facing this issue now?
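The two "Can't open /usr/src/linux-2.6/include/linux/version.h" lines from ./configure look like the root cause: with no kernel build tree found, the generated gnbd-kernel Makefile ends up running make -C with an empty directory, which is why make then stumbles over the M=... argument. One workaround, sketched under the assumption that a build tree for the running kernel is installed (typically from a kernel-devel or kernel-smp-devel package matching the running kernel; also check ./configure --help in case the script accepts a kernel source option):

# confirm a build tree exists for the running kernel
ls -d /lib/modules/$(uname -r)/build
# this configure run only looked in /usr/src/linux-2.6, so point
# that path at the real build tree
ln -s /lib/modules/$(uname -r)/build /usr/src/linux-2.6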
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
Sent: Monday, July 14, 2008 4:09 PM
To: linux clustering
Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
Singh Raina, Ajeet wrote:
> I am newbie to Kernel Upgradation.I downloaded the patch but donno
know
> how to proceed further.The patch is in .bzip2 format and all I did is
> run bunzip2 and that did the untar for me.
> Can you help me with further step?
>
Hope you are doing this on a test machine.
Do not try patching your kernel if you are not sure what you are doing.
That might be harmful.
Try downloading the complete kernel RPM and upgrade it.
Check for kernel RPMS:
http://rpmfind.net/linux/rpm2html/search.php?query=kernel&submit=Search+
...&system=&arch=
http://rpmfind.net/linux/rpm2html/search.php?query=kernel-devel&submit=S
earch+...&system=&arch=
ftp://rpmfind.net/linux/fedora/releases/9/Everything/i386/os/Packages/ke
rnel-2.6.25-14.fc9.i686.rpm
Thanks
Gowrishankar Rajaiyan
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
> Sent: Monday, July 14, 2008 3:46 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>
> Singh Raina, Ajeet wrote:
>
>> I have already setup Cluster-0.9 version Setup.Will Kernel
upgradation
>> flush this out?
>>
>>
> Upgrading the kernel should not flush out anything simply because your
> previous kernel is intact and you can boot into it.
> But make sure you do a /-Uvh/ and not a/ -ivh/.
>
>> Can you let me know the quick steps to do that?
>>
>>
> quick steps of what? Not clear what steps you are expecting here.
>
>> I checked with the List of kernel version and I think RHEL 4 Update 3
>> will be the right choice?
>>
>>
>>
> I am not sure about the RHEL version here. Thats for you to confirm
it.
> :-)
>
> Thanks
> Gowrishankar Rajaiyan
>
>
>
>> -----Original Message-----
>> From: linux-cluster-bounces at redhat.com
>> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of GS R
>> Sent: Monday, July 14, 2008 3:33 PM
>> To: linux clustering
>> Subject: Re: [Linux-cluster] How to Install Cluster-2.03.<> Package..
>>
>> Singh Raina, Ajeet wrote:
>>
>>
>>> I did the steps said by you but it is throwing error:
>>>
>>>
>>> [root at BL01DL385 cluster-2.03.04]# ./configure
>>>
>>> Configuring Makefiles for your system...
>>>
>>> Checking tree: nothing to do
>>>
>>> Checking kernel:
>>> Current kernel version: 2.6.9
>>> Minimum kernel version: 2.6.25
>>> FAILED!
>>>
>>> Should I have to upgrade the kernel Version.
>>>
>>>
>>>
>> Yes. You will have to upgrade the kernel.
>> Check http://www.kernel.org/ for the latest stable kernel.
>>
>> Thanks
>> Gowrishankar Rajaiyan
>>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>>
>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>>
>>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
From ajeet.singh.raina at logica.com Mon Jul 14 12:14:00 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 17:44:00 +0530
Subject: [Linux-cluster] KNowing CLuster Version..
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com>
How can I find out which cluster version is installed on my system?
I can see a version through system-config-cluster > Help, and it says:
1.9.<>.
I don't see any entry in cluster.conf which shows the cluster
version.
Please help.
From Norbert.Nemeth at mscibarra.com Mon Jul 14 12:26:46 2008
From: Norbert.Nemeth at mscibarra.com (Nemeth, Norbert)
Date: Mon, 14 Jul 2008 14:26:46 +0200
Subject: [Linux-cluster] RE: KNowing CLuster Version..
In-Reply-To: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com>
References: <0139539A634FD04A99C9B8880AB70CB209B1796C@in-ex004.groupinfra.com>
Message-ID:
# cman_tool status
The first line of the output shows the version.
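For example (just trimming cman_tool's output to the line referred to above):

# print only the first line of the cluster status
cman_tool status | head -1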
Norbert Németh
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Monday, July 14, 2008 2:14 PM
To: linux clustering
Subject: [Linux-cluster] KNowing CLuster Version..
How can I know which cluster I have installed my system with.
I can see the version through system-config-cluster > Help.And it says: 1.9.<>.
I don't even see any entry in cluster.conf which shows the cluster version?
Pls Help
________________________________
NOTICE: If received in error, please destroy and notify sender. Sender does not intend to waive confidentiality or privilege. Use of this email is prohibited when received in error.
Local registered entity: MSCI KFT
Metropolitan Court acting as the Court of Registry
Registered office: 1138 Budapest, Népfürdő utca 22, Hungary
Registration No. 01-09-885383
From ajeet.singh.raina at logica.com Mon Jul 14 12:31:30 2008
From: ajeet.singh.raina at logica.com (Singh Raina, Ajeet)
Date: Mon, 14 Jul 2008 18:01:30 +0530
Subject: [Linux-cluster] RE: KNowing CLuster Version..
In-Reply-To:
Message-ID: <0139539A634FD04A99C9B8880AB70CB209B1796D@in-ex004.groupinfra.com>
[root at BL02DL385 ~]# cman_tool status
Protocol version: 5.0.1
Config version: 74
Cluster name: Test_Cluster
Cluster ID: 59828
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 2
Expected_votes: 1
Total_votes: 2
Quorum: 1
Active subsystems: 1
Node name: BL02DL385
Node addresses: 10.14.236.106
That doesn't look right. It shows 5.0.1, but that doesn't match any of the releases I can see at ftp://sources.redhat.com/pub/cluster/releases/ .
Actually, I am planning to install the same version of the cluster suite, since I am finding it difficult to get the GFS-module-smp package for my RHEL 4 Update 2 x86_64 system.
Can you help me, please?
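As a side note, the 5.0.1 above is the protocol version printed on the first line of cman_tool's output, which is why it does not match the release numbers on the FTP site. One way to see the versions of the packages actually installed is to query RPM (a sketch; the package names below are the usual RHEL 4 cluster-suite names and may differ on your system):

# print name-version-release for the installed cluster packages
rpm -q ccs cman cman-kernel rgmanager GFS GFS-kernel system-config-cluster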
________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Nemeth, Norbert
Sent: Monday, July 14, 2008 5:57 PM
To: linux clustering
Subject: [Linux-cluster] RE: KNowing CLuster Version..
# cman_tool status
1st line
Norbert Németh
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Singh Raina, Ajeet
Sent: Monday, July 14, 2008 2:14 PM
To: linux clustering
Subject: [Linux-cluster] KNowing CLuster Version..
How can I know which cluster I have installed my system with.
I can see the version through system-config-cluster > Help.And it says: 1.9.<>.
I don't even see any entry in cluster.conf which shows the cluster version?
Pls Help