[Linux-cluster] Linux-cluster Digest, Vol 83, Issue 13

Sunil_Gupta2 at Dell.com
Wed Mar 9 12:14:17 UTC 2011


One node is offline, so the cluster is not fully formed. Check whether multicast traffic is working between the nodes...
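A quick way to verify this (assuming the omping utility is available on both nodes; the hostnames below are taken from your clustat output) is to run it on both nodes at the same time and check that each side sees the other's multicast replies:

[root at corviewprimary ~]# omping corviewprimary corviewsecondary
[root at corviewsecondary ~]# omping corviewprimary corviewsecondary

You can also confirm which multicast address cman is using:

[root at corviewprimary ~]# cman_tool status | grep -i multicast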

--Sunil

From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Balaji
Sent: Wednesday, March 09, 2011 4:54 PM
To: linux-cluster at redhat.com
Subject: Re: [Linux-cluster] Linux-cluster Digest, Vol 83, Issue 13

Dear All,

    Please find the attached log file for further analysis.
    Please help me resolve this problem as soon as possible.

    The clustat command output is below:
    [root at corviewprimary ~]# clustat
    Cluster Status for EMSCluster @ Wed Mar  9 17:00:03 2011
    Member Status: Quorate

     Member Name                                                   ID   Status
     ------ ----                                                   ---- ------
     corviewprimary                                                    1 Online, Local
     corviewsecondary                                                  2 Offline

    [root at corviewprimary ~]#

Regards,
-S.Balaji

linux-cluster-request at redhat.com wrote:

Send Linux-cluster mailing list submissions to

        linux-cluster at redhat.com

To subscribe or unsubscribe via the World Wide Web, visit

        https://www.redhat.com/mailman/listinfo/linux-cluster

or, via email, send a message with subject or body 'help' to

        linux-cluster-request at redhat.com

You can reach the person managing the list at

        linux-cluster-owner at redhat.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. Re: clvmd hangs on startup (Valeriu Mutu)
   2. Re: clvmd hangs on startup (Jeff Sturm)
   3. dlm-pcmk-3.0.17-1.fc14.x86_64 and
      gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Gregory Bartholomew)
   4. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
      gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Fabio M. Di Nitto)
   5. Re: unable to live migrate a vm in rh el 6: Migration
      unexpectedly failed (Lon Hohberger)
   6. Re: rgmanager not running (Sunil_Gupta2 at Dell.com)
   7. Re: unable to live migrate a vm in rh el 6: Migration
      unexpectedly failed (Gianluca Cecchi)
   8. Re: dlm-pcmk-3.0.17-1.fc14.x86_64 and
      gfs-pcmk-3.0.17-1.fc14.x86_64 woes (Andrew Beekhof)
   9. Re: unable to live migrate a vm in rh el 6: Migration
      unexpectedly failed (Gianluca Cecchi)
  10. Re: unable to live migrate a vm in rh el 6: Migration
      unexpectedly failed (Gianluca Cecchi)

----------------------------------------------------------------------



Message: 1
Date: Tue, 8 Mar 2011 12:11:53 -0500
From: Valeriu Mutu <vmutu at pcbi.upenn.edu>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] clvmd hangs on startup
Message-ID: <20110308171153.GB272 at bsdera.pcbi.upenn.edu>
Content-Type: text/plain; charset=us-ascii

Hi,

I think the problem is solved. I was using a 9000-byte MTU on the Xen virtual machines' iSCSI interface. Switching back to a 1500-byte MTU allowed clvmd to start working.
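For reference, the interface MTU can be checked and changed on the fly with iproute2 (eth1 below is just a placeholder for the actual iSCSI interface name):

[root at vm1 ~]# ip link show eth1
[root at vm1 ~]# ip link set eth1 mtu 1500

If you do want jumbo frames, every hop in the path (the VM interface, the Xen bridge, the switch ports, and the iSCSI target) has to agree on the MTU; otherwise oversized frames are dropped and daemons like clvmd can appear to hang, exactly as described here.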



On Thu, Mar 03, 2011 at 11:50:57AM -0500, Valeriu Mutu wrote:

On Wed, Mar 02, 2011 at 05:36:45PM -0500, Jeff Sturm wrote:

Double-check that the 2nd node can read and write the shared iSCSI
storage.

Reading/writing from/to the iSCSI storage device works, as seen below.



On the 1st node:

[root at vm1 cluster]# dd count=10000 bs=1024 if=/dev/urandom of=/dev/mapper/pcbi-homes
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 3.39855 seconds, 3.0 MB/s

[root at vm1 cluster]# dd count=10000 bs=1024 if=/dev/mapper/pcbi-homes of=/dev/null
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.331069 seconds, 30.9 MB/s

On the 2nd node:

[root at vm2 ~]# dd count=10000 bs=1024 if=/dev/urandom of=/dev/mapper/pcbi-homes
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 3.2465 seconds, 3.2 MB/s

[root at vm2 ~]# dd count=10000 bs=1024 if=/dev/mapper/pcbi-homes of=/dev/null
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 0.223337 seconds, 45.8 MB/s
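One caveat about these figures: without any flags, dd goes through the page cache, so the read numbers can reflect cached data rather than actual iSCSI throughput, and the write numbers are also bounded by how fast /dev/urandom can produce data. To make dd hit the device directly, coreutils dd accepts iflag=direct/oflag=direct (assuming the device accepts O_DIRECT at this block size), e.g.:

[root at vm2 ~]# dd count=10000 bs=1024 if=/dev/mapper/pcbi-homes of=/dev/null iflag=direct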






