[Linux-cluster] How to run same service in parallel in RedHat Cluster 5.0
Ruben Sajnovetzky
rsajnove at cisco.com
Wed Sep 28 16:57:02 UTC 2011
I copied the full cluster.conf; I deleted everything else from it to "concentrate"
on the issue.
Now I re-created everything from scratch, with only the FS service. I'm copying
here the files and output you requested.
The situation is still the same.
cluster.conf file:
<?xml version="1.0"?>
<cluster alias="PPM Toronto" config_version="30" name="PPM Toronto">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="server-87111" nodeid="1" votes="1">
            <fence/>
        </clusternode>
        <clusternode name="server-87112" nodeid="2" votes="1">
            <fence/>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="0">
        <multicast addr="224.4.5.6"/>
    </cman>
    <fencedevices/>
    <rm>
        <failoverdomains>
            <failoverdomain name="PPM GW Failover" nofailback="1" ordered="0" restricted="1">
                <failoverdomainnode name="server-87111" priority="1"/>
            </failoverdomain>
            <failoverdomain name="PPM Units Failover" nofailback="1" ordered="0" restricted="1">
                <failoverdomainnode name="server-87112" priority="1"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <fs device="/dev/VolGroup00/optvol" force_fsck="1" force_unmount="0"
                fsid="36845" fstype="ext3" mountpoint="/opt" name="PPM_OPT_FS"
                self_fence="0"/>
        </resources>
        <service autostart="0" domain="PPM GW Failover" exclusive="0" name="PPM Gateway">
            <fs ref="PPM_OPT_FS"/>
        </service>
        <service autostart="0" domain="PPM Units Failover" exclusive="0" name="PPM Units">
            <fs ref="PPM_OPT_FS"/>
        </service>
    </rm>
</cluster>
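For comparison only, here is a sketch of a variant of the <rm> section in which
each service references its own fs resource instead of both pointing at the one
PPM_OPT_FS definition. It is untested, and the resource names and the second
fsid value are made up purely for illustration:

    <rm>
        <!-- failoverdomains section unchanged from above -->
        <resources>
            <!-- hypothetical: one fs resource per service; names and fsid 36846 invented -->
            <fs device="/dev/VolGroup00/optvol" force_fsck="1" force_unmount="0"
                fsid="36845" fstype="ext3" mountpoint="/opt" name="PPM_OPT_FS_GW"
                self_fence="0"/>
            <fs device="/dev/VolGroup00/optvol" force_fsck="1" force_unmount="0"
                fsid="36846" fstype="ext3" mountpoint="/opt" name="PPM_OPT_FS_UNITS"
                self_fence="0"/>
        </resources>
        <service autostart="0" domain="PPM GW Failover" exclusive="0" name="PPM Gateway">
            <fs ref="PPM_OPT_FS_GW"/>
        </service>
        <service autostart="0" domain="PPM Units Failover" exclusive="0" name="PPM Units">
            <fs ref="PPM_OPT_FS_UNITS"/>
        </service>
    </rm>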
------------------------------------------
/etc/fstab
/dev/VolGroup00/LogVol00     /          ext3    defaults        1 1
LABEL=/boot                  /boot      ext3    defaults        1 2
tmpfs                        /dev/shm   tmpfs   defaults        0 0
devpts                       /dev/pts   devpts  gid=5,mode=620  0 0
sysfs                        /sys       sysfs   defaults        0 0
proc                         /proc      proc    defaults        0 0
/dev/VolGroup00/LogVol01     swap       swap    defaults        0 0
/dev/VolGroup00/homevol      /home      ext3    defaults        1 1
#####/dev/VolGroup00/optvol  /opt       ext3    defaults        1 1
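The /opt line stays commented out because the mount is supposed to be handled by
the cluster's fs resource rather than by fstab. Just as a manual sanity check
outside of rgmanager (not something the configuration itself requires), the
volume can be mounted and released by hand:

    # mount -t ext3 /dev/VolGroup00/optvol /opt   # verify the LV mounts cleanly
    # umount /opt                                 # release it before letting rgmanager manage it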
------------------------------------------
[root@server-87112 cluster]# pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [255.88 GB / 17.09 GB free]
Total: 1 [255.88 GB] / in use: 1 [255.88 GB] / in no VG: 0 [0 ]
[root@server-87112 cluster]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
[root@server-87112 cluster]# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [11.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [7.78 GB] inherit
ACTIVE '/dev/VolGroup00/homevol' [100.00 GB] inherit
ACTIVE '/dev/VolGroup00/optvol' [120.00 GB] inherit
[root@server-87111 cluster]# pvscan
PV /dev/sda2 VG VolGroup00 lvm2 [255.88 GB / 17.09 GB free]
Total: 1 [255.88 GB] / in use: 1 [255.88 GB] / in no VG: 0 [0 ]
[root@server-87111 cluster]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
[root@server-87111 cluster]# lvscan
ACTIVE '/dev/VolGroup00/LogVol00' [11.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [7.78 GB] inherit
ACTIVE '/dev/VolGroup00/homevol' [100.00 GB] inherit
ACTIVE '/dev/VolGroup00/optvol' [120.00 GB] inherit
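Since both services are defined with autostart="0", they have to be enabled by
hand. The rgmanager commands involved look roughly like this (the -m member
arguments are just illustrative, using the node names from cluster.conf above):

    # start each service on its own node
    clusvcadm -e "PPM Gateway" -m server-87111
    clusvcadm -e "PPM Units"   -m server-87112

    # show member and service state
    clustat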
On 28-Sep-2011 12:44 PM, "Digimer" <linux at alteeve.com> wrote:
> On 09/28/2011 06:09 AM, Ruben Sajnovetzky wrote:
>> This approach didn't work either :(
>> The first server started the service, the second couldn't start.
>
> You only shared a small snippet of your cluster.conf config, and none of
> the other requested info. I don't know what might be missing versus omitted.
>
> --
> Digimer
> E-Mail: digimer at alteeve.com
> Freenode handle: digimer
> Papers and Projects: http://alteeve.com
> Node Assassin: http://nodeassassin.org
> "At what point did we forget that the Space Shuttle was, essentially,
> a program that strapped human beings to an explosion and tried to stab
> through the sky with fire and math?"
>