<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
<a class="moz-txt-link-abbreviated" href="mailto:linux-cluster-request@redhat.com">linux-cluster-request@redhat.com</a> wrote:
<blockquote cite="mid20080229170012.BFD7C618D51@hormel.redhat.com"
type="cite">
<pre wrap="">Send Linux-cluster mailing list submissions to
<a class="moz-txt-link-abbreviated" href="mailto:linux-cluster@redhat.com">linux-cluster@redhat.com</a>
To subscribe or unsubscribe via the World Wide Web, visit
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/linux-cluster">https://www.redhat.com/mailman/listinfo/linux-cluster</a>
or, via email, send a message with subject or body 'help' to
<a class="moz-txt-link-abbreviated" href="mailto:linux-cluster-request@redhat.com">linux-cluster-request@redhat.com</a>
You can reach the person managing the list at
<a class="moz-txt-link-abbreviated" href="mailto:linux-cluster-owner@redhat.com">linux-cluster-owner@redhat.com</a>
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."
Today's Topics:
1. RE: is clvm a must all the time in a cluster ? (Roger Peña)
2. Re: gfs_tool: Cannot allocate memory (Bob Peterson)
3. Ethernet Channel Bonding Configuration in RHEL Cluster Suite
Setup (Balaji)
4. R: [Linux-cluster] Ethernet Channel Bonding Configuration in
RHELCluster Suite Setup (Leandro Dardini)
5. probe a special port and fence according to the result (Peter)
6. Re: probe a special port and fence according to the result
(Lon Hohberger)
7. Re: Ethernet Channel Bonding Configuration in RHEL Cluster
Suite Setup (Lon Hohberger)
8. Re: probe a special port and fence according to the result
(Brian Kroth)
9. Re: probe a special port and fence according to the result (Peter)
10. Re: Redundant (multipath)/Replicating NFS (<a class="moz-txt-link-abbreviated" href="mailto:isplist@logicore.net">isplist@logicore.net</a>)
11. Re: Redundant (multipath)/Replicating NFS (<a class="moz-txt-link-abbreviated" href="mailto:gordan@bobich.net">gordan@bobich.net</a>)
12. Re: Redundant (multipath)/Replicating NFS (<a class="moz-txt-link-abbreviated" href="mailto:isplist@logicore.net">isplist@logicore.net</a>)
13. Re: gfs_tool: Cannot allocate memory (<a class="moz-txt-link-abbreviated" href="mailto:rhurst@bidmc.harvard.edu">rhurst@bidmc.harvard.edu</a>)
----------------------------------------------------------------------
Message: 1
Date: Thu, 28 Feb 2008 22:00:04 -0500 (EST)
From: Roger Peña <a class="moz-txt-link-rfc2396E" href="mailto:orkcu@yahoo.com"><orkcu@yahoo.com></a>
Subject: RE: [Linux-cluster] is clvm a must all the time in a cluster
?
To: linux clustering <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:233538.63310.qm@web50602.mail.re2.yahoo.com"><233538.63310.qm@web50602.mail.re2.yahoo.com></a>
Content-Type: text/plain; charset=iso-8859-1
--- Roger Peña <a class="moz-txt-link-rfc2396E" href="mailto:orkcu@yahoo.com"><orkcu@yahoo.com></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">--- Steffen Plotner <a class="moz-txt-link-rfc2396E" href="mailto:swplotner@amherst.edu"><swplotner@amherst.edu></a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi,
You asked below: can I run the cluster with GFS
without clvmd? The answer is yes. I believe in
running the fewest possible components, and
found that clvmd had start-up problems (and then
refused to stop) after an update of RHEL4 in
December.
</pre>
</blockquote>
<pre wrap="">I applied those updates this month, so I guess I am
seeing the same thing as you.
</pre>
<blockquote type="cite">
<pre wrap="">It is clearly possible to use GFS directly on a
</pre>
</blockquote>
<pre wrap="">SAN
</pre>
<blockquote type="cite">
<pre wrap="">based LUN.
</pre>
</blockquote>
<pre wrap="">I know that but, as you already said, the problem is
having a unique name for the filesystem on all the
nodes.
</pre>
<blockquote type="cite">
<pre wrap="">The trick of course is how to deal with
the /dev/sdb reference which will probably not be
the same on all hosts. To fix that use udev rules
that provide a symlink (using the serial number of
the LUN, for example) by which the GFS file system
can be referred to in /etc/fstab.
</pre>
</blockquote>
<pre wrap="">I am sure udev rules work, but setting up that
environment is definitely more complex than using an LV :-)
So, what happens if I use the shared LV just as a
local LV?
Each node will treat it the same way as it treats
an LV from its local disks. I guess that will not be a
problem as long as I do not touch the VG metadata,
am I right?
</pre>
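<pre wrap="">A stable device name of the kind Steffen describes can be built with a udev rule; a minimal sketch, assuming RHEL4-era udev and scsi_id, with a placeholder serial number:

```shell
# /etc/udev/rules.d/60-gfs.rules (sketch; the serial is a placeholder)
# Query the LUN serial with scsi_id and, on a match, add a stable symlink:
KERNEL=="sd*", PROGRAM=="/sbin/scsi_id -g -u -s /block/%k", \
  RESULT=="3600a0b8000123456", SYMLINK+="gfs_lun"
```

/etc/fstab can then mount the GFS filesystem by the stable name
/dev/gfs_lun on every node, whatever device letter the SAN LUN gets.
</pre>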
<blockquote type="cite">
<pre wrap="">We have converted 2 clusters in the past few
</pre>
</blockquote>
<pre wrap="">months
</pre>
<blockquote type="cite">
<pre wrap="">because we have had real problems with clvmd
misbehaving itself during startup. At this point
</pre>
</blockquote>
<pre wrap="">it
</pre>
<blockquote type="cite">
<pre wrap="">is a pleasure to let the cluster boot by itself
</pre>
</blockquote>
<pre wrap="">and
</pre>
<blockquote type="cite">
<pre wrap="">not to have to worry about GFS file systems not
being mounted (ccsd, cman, fenced, iscsi, gfs).
</pre>
</blockquote>
<pre wrap="">Not activating the LV even when clvmd is running? That
happened to me several times in the last month ;-)
that is why I want to get rid of lvm :-)
</pre>
</blockquote>
<pre wrap=""><!----> ^^^
I mean clvm
sorry, I want to use lvm, but I don't want to use clvm
all the time, just on the rare occasions when I need to
create or resize a shared lv
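Roger's scheme of running clvmd only for administration windows
might look like this; a sketch, assuming RHEL4-style init scripts
and example VG, LV, and mount-point names:

```shell
# Start clvmd only while changing shared LVM metadata:
service clvmd start
lvextend -L +10G /dev/sharedvg/sharedlv   # grow the shared LV
gfs_grow /mnt/gfs                         # then grow the mounted GFS
service clvmd stop                        # back to clvmd-free operation
```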
cu
roger
__________________________________________
RedHat Certified ( RHCE )
Cisco Certified ( CCNA & CCDA )
------------------------------
Message: 2
Date: Thu, 28 Feb 2008 21:03:36 -0600
From: Bob Peterson <a class="moz-txt-link-rfc2396E" href="mailto:rpeterso@redhat.com"><rpeterso@redhat.com></a>
Subject: Re: [Linux-cluster] gfs_tool: Cannot allocate memory
To: linux clustering <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:1204254216.3272.15.camel@technetium.msp.redhat.com"><1204254216.3272.15.camel@technetium.msp.redhat.com></a>
Content-Type: text/plain
On Thu, 2008-02-28 at 10:28 -0500, <a class="moz-txt-link-abbreviated" href="mailto:rhurst@bidmc.harvard.edu">rhurst@bidmc.harvard.edu</a> wrote:
</pre>
<blockquote type="cite">
<pre wrap="">ioctl(3, 0x472d, 0x7fbfffe300) = -1 ENOMEM (Cannot allocate
</pre>
</blockquote>
<pre wrap=""><!---->
Hi Robert,
The gfs_tool does most of its work using ioctl calls to the gfs kernel
module. Often, it tries to allocate and pass in a huge buffer to make
sure it doesn't ask for more than the kernel needs to respond with.
In some cases, it doesn't need to allocate such a big buffer.
I fixed "gfs_tool counters" for a similar ENOMEM problem with
bugzilla record 229461 about a year ago. (I don't know if that bug
record is public or locked so you may not be able to access it, which
is out of my control--sorry).
I should probably go through all the other gfs_tool functions, including
the two you mentioned, and figure out their minimum memory requirements
and change the code so it doesn't ask for so much memory.
Perhaps you can open up a bugzilla record so I can schedule this work?
Also, you didn't say whether you're on RHEL4/Centos4/similar, or
RHEL5/Centos5/similar.
Regards,
Bob Peterson
Red Hat GFS
------------------------------
Message: 3
Date: Fri, 29 Feb 2008 17:58:00 +0530
From: Balaji <a class="moz-txt-link-rfc2396E" href="mailto:balajisundar@midascomm.com"><balajisundar@midascomm.com></a>
Subject: [Linux-cluster] Ethernet Channel Bonding Configuration in
RHEL Cluster Suite Setup
To: ClusterGrp <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:47C7FA50.8080202@midascomm.com"><47C7FA50.8080202@midascomm.com></a>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Dear All,
I am new to RHEL Cluster Suite.
I configured the cluster and rebooted the systems; the cluster
then became active on the primary node with the other node
passive, and the member status became Online for both cluster
nodes.
In Cluster Suite I am monitoring the resources (script files and
an IP address). During a network failure, one or both nodes are
removed from cluster membership and all the resources are
stopped; only after a reboot do both systems rejoin the cluster.
I have followed the RHEL Cluster Suite configuration document
"rh-cs-en-4.pdf" and found that Ethernet channel bonding on each
cluster node avoids a network single point of failure in the
cluster.
I have configured Ethernet channel bonding in active-backup
mode, without a fence device.
Ethernet Channel Bonding Configuration Details are
1. In " /etc/modprobe.conf" file added the following bonding driver
support
alias bond0 bonding
options bonding miimon=100 mode=1
2. Edited the "/etc/sysconfig/network-scripts/ifcfg-eth0" file added
the following configuration
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
3. Edited the "/etc/sysconfig/network-scripts/ifcfg-eth1" file added
the following configuration
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
4. Created a network script for the bonding device
"/etc/sysconfig/network-scripts/ifcfg-bond0"
DEVICE=bond0
USERCTL=no
ONBOOT=yes
IPADDR=192.168.13.109
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
5. Reboot the system for the changes to take effect.
6. Configure Ethernet Channel Bonding
7. Rebooted the system; the cluster services are then active on
both nodes, but each node reports its own member status as
Online and the other node as Offline
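Once the nodes are back up, the bonding driver's status can be
read from /proc; a quick check (standard path for the Linux
bonding driver):

```shell
# Shows "Bonding Mode: fault-tolerance (active-backup)", the currently
# active slave, MII status, and one section per enslaved NIC:
cat /proc/net/bonding/bond0
```

Comparing this output on both nodes helps rule the bond in or out
as the cause when each node sees the other as Offline.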
I need clarification on whether Ethernet channel bonding will
work together with a fence device or not.
I am not sure why this is happening. Can someone throw some
light on this?
Regards
-S.Balaji
------------------------------
Message: 4
Date: Fri, 29 Feb 2008 13:51:05 +0100
From: "Leandro Dardini" <a class="moz-txt-link-rfc2396E" href="mailto:l.dardini@comune.prato.it"><l.dardini@comune.prato.it></a>
Subject: R: [Linux-cluster] Ethernet Channel Bonding Configuration in
RHELCluster Suite Setup
To: "linux clustering" <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID:
<a class="moz-txt-link-rfc2396E" href="mailto:6F861500A5092B4C8CD653DE20A4AA0D511B91@exchange3.comune.prato.local"><6F861500A5092B4C8CD653DE20A4AA0D511B91@exchange3.comune.prato.local></a>
Content-Type: text/plain; charset="iso-8859-1"
</pre>
<blockquote type="cite">
<pre wrap="">-----Original Message-----
From: <a class="moz-txt-link-abbreviated" href="mailto:linux-cluster-bounces@redhat.com">linux-cluster-bounces@redhat.com</a>
[<a class="moz-txt-link-freetext" href="mailto:linux-cluster-bounces@redhat.com">mailto:linux-cluster-bounces@redhat.com</a>] On behalf of Balaji
Sent: Friday, 29 February 2008, 13:28
To: ClusterGrp
Subject: [Linux-cluster] Ethernet Channel Bonding
Configuration in RHELCluster Suite Setup
Dear All,
I am new in RHEL Cluster Suite.
I have Configure Cluster and Rebooted the system and then
cluster become active in primary node and other node as
passive and member status becomes Online for both the cluster nodes
In Cluster Suite i am monitoring the resources as scripts
files and ipaddress and During network failure one of the
node or both the nodes are removed from the cluster member
and All the resources are stopped and then rebooted the
system only both the system are joining into the cluster member.
I have followed the RHEL Cluster Suite Configuration
document "rh-cs-en-4.pdf" and I have found out Ethernet
Channel Bonding in Each Cluster Nodes will avoid the network
single point failure in cluster system.
I have configured the Ethernet Channel Bonding with mode
as active-backup without fence device.
Ethernet Channel Bonding Configuration Details are
1. In " /etc/modprobe.conf" file added the following
bonding driver support
alias bond0 bonding
options bonding miimon=100 mode=1
2. Edited the "/etc/sysconfig/network-scripts/ifcfg-eth0"
file added the following configuration
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
3. Edited the "/etc/sysconfig/network-scripts/ifcfg-eth1"
file added the following configuration
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
4. Created a network script for the bonding device
"/etc/sysconfig/network-scripts/ifcfg-bond0"
DEVICE=bond0
USERCTL=no
ONBOOT=yes
IPADDR=192.168.13.109
NETMASK=255.255.255.0
GATEWAY=192.168.13.1
5. Reboot the system for the changes to take effect.
6. Configure Ethernet Channel Bonding
7. Rebooted the system and then cluster services are
active on both the nodes and member status of current node is
Online and other node as Offline
I need clarification about Ethernet Channel Bonding will
work with Fence Device or not.
I am not sure why this is happening. Can some one throw
light on this.
Regards
-S.Balaji
</pre>
</blockquote>
<pre wrap=""><!---->
When there is a network failure, each member cannot reach the other,
so each member triggers the fence script to turn the other one off.
Unfortunately the network is down, so the low-level network layer
holds the fence script's connection request and waits for the network
to come back up. When the network is available again, each member can
reach the fencing device and turns the other off. There is no simple
way to avoid this. You can make the network close to 100% fault-proof
using a bonded device; you can use a STONITH fencing device, so that
the first member to regain the network prevents the other from
fencing; or you can use three members, so that in a network failure an
isolated node has no quorum and cannot fence the others.
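The three-member suggestion works because of majority quorum; a
sketch of the arithmetic, assuming CMAN's default of one vote per
node:

```shell
# Majority quorum: a partition needs strictly more than half of the
# expected votes, i.e. expected_votes / 2 + 1 (integer division).
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 2   # each half of a split 2-node cluster has 1 vote: neither side wins
quorum 3   # the 2-node side of a split 3-node cluster keeps quorum
```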
Leandro
------------------------------
Message: 5
Date: Fri, 29 Feb 2008 15:05:23 +0100
From: Peter <a class="moz-txt-link-rfc2396E" href="mailto:p.elmers@gmx.de"><p.elmers@gmx.de></a>
Subject: [Linux-cluster] probe a special port and fence according to
the result
To: linux clustering <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:99FC135E-6367-41B5-B6B5-AA1FEC5D9833@gmx.de"><99FC135E-6367-41B5-B6B5-AA1FEC5D9833@gmx.de></a>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
Hi!
I am planning to test a service for availability and, if the
service is no longer available, fence the node and start the
service on another node. Therefore, I need a clue how to probe
a port with system tools.
Sure, telnet offers a way to check, but I have no idea how to
avoid its interactivity, so it is too difficult for me to
script, and it is usually not allowed in this environment.
On Solaris, there is the "ping -p" command to test the
availability of an application waiting on a port. But on Red
Hat, I have no clue.
Could you please help?
Peter
------------------------------
Message: 6
Date: Fri, 29 Feb 2008 09:20:23 -0500
From: Lon Hohberger <a class="moz-txt-link-rfc2396E" href="mailto:lhh@redhat.com"><lhh@redhat.com></a>
Subject: Re: [Linux-cluster] probe a special port and fence according
to the result
To: linux clustering <a class="moz-txt-link-rfc2396E" href="mailto:linux-cluster@redhat.com"><linux-cluster@redhat.com></a>
Message-ID: <a class="moz-txt-link-rfc2396E" href="mailto:20080229142023.GD6571@redhat.com"><20080229142023.GD6571@redhat.com></a>
Content-Type: text/plain; charset=us-ascii
On Fri, Feb 29, 2008 at 03:05:23PM +0100, Peter wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi!
I am planning to test a service for availability and if the service is not
available anymore, fence the node and start the service on another node.
Therefore, i need a clue how to probe a port with system tools.
Sure, with telnet there is a possibility to check. But i have no idea to
avoid the interactivity. So it is too difficult for me to implement and
usually not allowed in this environment.
On Solaris, there is the "ping -p" command to test the availability of a
application waiting on a port. But for Redhat, i have no clue.
Could you please help?
</pre>
</blockquote>
<pre wrap=""><!---->
Netcat? (man nc)?
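For instance, nc can probe a TCP port non-interactively and report
the result through its exit status; a sketch with a placeholder
host and port:

```shell
# -z: scan only, send no data; -w 3: time out after 3 seconds.
if nc -z -w 3 127.0.0.1 80; then
    echo "service is listening"
else
    echo "service is down"
fi
```

A service status script can return nc's exit status directly, which
is the kind of check the cluster can act on.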
</pre>
</blockquote>
Dear All,<br>
Ping is working, but the members do not join one cluster;
instead each node forms a new cluster by itself.<br>
The following messages are logged in "/var/log/messages":<br>
CMAN: forming a new cluster<br>
CMAN: quorum regained, resuming activity<br>
<br>
<br>
Regards<br>
-S.Balaji<br>
<br>
<br>
<pre wrap="">then cluster services are active on both
the nodes and member status of current node is Online and other node as
Offline</pre>
<br>
</body>
</html>