[Linux-cluster] HA Clustering - Need Help

Hagmann, Michael Michael.Hagmann at hilti.com
Fri Jan 26 07:47:01 UTC 2007


Hi
 
What I can recommend (in short) is a RHEL4 U4+ / GFS cluster. When you mount
the same file system on more than one node at the same time, you need a
cluster file system (like GFS or maybe OCFS2).
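For example, a GFS file system for a two-node cluster with DLM locking is
created roughly like this (cluster name, file system name and device are only
placeholders, adapt them to your setup):

  # one journal (-j) per node that will mount the file system
  gfs_mkfs -p lock_dlm -t oracluster:oragfs -j 2 /dev/vg_oracle/lv_oracle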
 
Example Config:
 
RHEL4 U4 / GFS with DLM and a quorum disk (needed when you only have 2 nodes).
Also very important is the fencing method (we now use the iLO interface of our
HP servers), and for the cluster interconnect I recommend a separate network.
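For illustration, a two-node cluster.conf with quorum disk and iLO fencing
could look roughly like this (all names, addresses and passwords are
placeholders, not from our real config):

  <?xml version="1.0"?>
  <cluster name="oracluster" config_version="1">
    <!-- 2 node votes + 1 quorum disk vote -->
    <cman expected_votes="3"/>
    <quorumd interval="1" tko="10" votes="1" label="oraqdisk"/>
    <clusternodes>
      <clusternode name="node1" votes="1">
        <fence>
          <method name="1">
            <device name="ilo-node1"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="node2" votes="1">
        <fence>
          <method name="1">
            <device name="ilo-node2"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <fencedevice agent="fence_ilo" name="ilo-node1" hostname="ilo-node1.example.com" login="Administrator" passwd="secret"/>
      <fencedevice agent="fence_ilo" name="ilo-node2" hostname="ilo-node2.example.com" login="Administrator" passwd="secret"/>
    </fencedevices>
  </cluster>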
For the multipath connection you can use the device-mapper multipath tools
(they come with RHEL4 U4), or you use the vendor-specific driver, like the
QLogic driver from HP in our case.
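To get started with the device-mapper multipath tools, something like this
(the multipath.conf settings depend on your storage array, so treat it only as
a sketch):

  # load the multipath module and enable the daemon
  modprobe dm_multipath
  chkconfig multipathd on
  # edit /etc/multipath.conf first - the shipped default blacklists all
  # devices - then build the maps and check them
  multipath -v2
  multipath -ll
  service multipathd start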
If you don't have a storage box with integrated mirroring (which I think is
the best solution), you can also use LVM mirroring. See also the presentation
from Heinz Mauelshagen
(http://people.redhat.com/~heinzm/talks/MassenspeicherUnunterbrochen.odp, in
German; maybe he has an English one).
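A rough sketch of host-based LVM mirroring across two multipath devices (VG/LV
names, sizes and devices are made up; for shared cluster volumes check first
whether cluster-aware mirroring is available in your release):

  pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
  vgcreate vg_oracle /dev/mapper/mpath0 /dev/mapper/mpath1
  # --corelog keeps the mirror log in memory, so no third log device is needed
  lvcreate -L 100G -m 1 --corelog -n lv_oracle vg_oracle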
 
You should also always use an odd number of members (like 3, 5, 7, ...),
because quorum and fencing then behave better (there is no even split). But
when you have a real HA solution, most of the time you also have two
datacenters, and then the cluster should keep working when one datacenter is
not available. So you either need a new datacenter ;-) for the third member,
or you fall back to the fencing problem, and then the quorum disk is maybe the
best solution.
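Creating the quorum disk itself is just (device and label are placeholders):

  # label a small shared LUN as quorum disk, then point the quorumd
  # entry in cluster.conf at the same label
  mkqdisk -c /dev/mapper/mpathq -l oraqdisk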
 
We have around 20 RHEL4 / GFS clusters in HA configuration (also with two
datacenters), but without a quorum disk (it was not available in U3). On all
our clusters we use the Shared Root extension from ATIX
(http://www.opensharedroot.org/documentation/the-opensharedroot-mini-howto/),
because we come from the TruCluster / Tru64 side and like the shared-root
approach.
 
The last tip from me: write a test plan, so that after every config change you
can check your installation again.
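A few commands that belong in such a test plan, for example:

  cman_tool status        # quorum and vote count
  cman_tool nodes         # cluster membership
  clustat                 # service and resource state
  fence_node node2        # careful: this really fences (power-cycles) the node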
 
I hope this helps
 
good luck
 
Mike


  _____  

From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Net Cerebrum
Sent: Tuesday, 23 January 2007 18:34
To: linux-cluster at redhat.com
Subject: [Linux-cluster] HA Clustering - Need Help


Hello All,

I am totally new to HA clustering and am trying hard to grasp the
fundamentals in a limited time frame. I have been asked by my company to
create a high availability cluster using Red Hat Cluster Suite on hardware
comprising two servers running RHEL AS 4 and one shared external storage
array. The cluster would be running in Active-Active state. Oracle Database
version 9 (not RAC) would run on one of the servers while the Oracle
Applications version 11 would run on the other. In case of failure of either
of the servers, the service would be started on the other server. Both the
servers (nodes) would be connected to the storage array through two
redundant SCSI controllers. 

Since the storage has redundant controllers, both the servers would be
connected to the storage array using two channels each and the requirement
is to make it an active-active, load-balanced configuration using multipath
software. The storage vendor has suggested using the multipath option of the
mdadm software for creating multipath devices for the storage array.
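For reference, the vendor's mdadm suggestion would presumably look something
like this (device names are only examples; both paths point to the same LUN):

  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdb /dev/sdc
  cat /proc/mdstat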

I have gone through the manuals, and since this is my first attempt at high
availability clustering I have many doubts and questions. What file system
should be used on the external storage? Is it better to use ext3 or Red Hat
GFS? In certain places it is mentioned that GFS should be used only if the
number of nodes is 3 or more and GULM is being used. Since we have only two
nodes, we plan to use DLM. It is also mentioned that GFS and CLVM may not work
on a software RAID device. Would the multipath devices created (/dev/md0,
/dev/md1, etc.) be considered software RAID devices, even though in the real
sense they are not? Further, the development team is not too sure about the
compatibility between GFS and the Oracle Database and Applications. What could
be the pros and cons of using the ext3 file system in this scenario?

The development team just wants one file system to be used on the storage,
which would be mounted as /oracle on both the servers / nodes, and all the
binaries and data would reside on it. Since this file system is going to be
mounted at boot time, my understanding is that no mounting or unmounting of
any file system will take place during failover, so the cluster configuration
should reflect that. The documentation repeatedly refers to mounting of the
file systems when failover takes place, which is giving rise to a little
confusion. Further, there are references to a quorum partition in the
documentation, but I have not been able to find any provision to make use of
it in the cluster configuration tool.

Please help me clarify these issues and suggest how to go about setting up
this cluster. I would be really grateful for any suggestions and references.

Thanks,


