[Linux-cluster] Linux-cluster Digest, Vol 86, Issue 18

Balaji S skjbalaji at gmail.com
Mon Jun 20 16:13:46 UTC 2011


Thanks Dominic. I have added the filter to lvm.conf, but I am still getting the
same error messages. Below is the line I added to lvm.conf; does anything else
need to be modified to avoid this kind of error in the system messages?

 filter = [ "a|/dev/mapper|", "a|/dev/sda|", "r/.*/" ]
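
In case it matters, is something like the following the right way to verify
that the new filter is actually being applied? The commands below are only my
guess; the exact debug messages differ between LVM versions, and the cache
path assumes the RHEL 5 default.

 # rm -f /etc/lvm/cache/.cache        (clear the persistent filter cache, if enabled)
 # vgscan -vvv 2>&1 | grep -iE 'filter|regex'
 # lvmdiskscan

My understanding is that the filter only stops LVM itself from probing those
paths, so anything else that reads the passive paths could still trigger the
kernel messages.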

On Mon, Jun 20, 2011 at 9:30 PM, <linux-cluster-request at redhat.com> wrote:

> Send Linux-cluster mailing list submissions to
>        linux-cluster at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
>        linux-cluster-request at redhat.com
>
> You can reach the person managing the list at
>        linux-cluster-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
>
>
> Today's Topics:
>
>   1. Re: Cluster Failover Failed (dOminic)
>   2. Re: umount failing... (dOminic)
>   3. Re: Plugged out blade from bladecenter chassis -
>      fence_bladecenter failed (dOminic)
>   4. Re: Plugged out blade from bladecenter chassis -
>      fence_bladecenter failed (Parvez Shaikh)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 19 Jun 2011 22:03:52 +0530
> From: dOminic <share2dom at gmail.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] Cluster Failover Failed
> Message-ID: <BANLkTikrF-Ge2TpQFGfng4-ob+osApbNZw at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Balaji,
>
> Yes, the reported message is harmless ... However, you can try the following:
>
> 1) I would suggest setting the filter in lvm.conf so that it properly scans
> your mpath* devices and local disks.
> 2) Enable the blacklist section in multipath.conf, e.g.:
>
> blacklist {
>       devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
>       devnode "^hd[a-z]"
> }
>
> # multipath -v2
>
> Observe the box. Check whether that helps ...
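>
> To confirm the blacklist took effect after re-running multipath, something
> along these lines should do (the -v3 grep is only a suggestion; its exact
> output differs between device-mapper-multipath versions):
>
> # multipath -ll
> # multipath -v3 2>&1 | grep -i blacklist
>
> The local disks and other blacklisted nodes should no longer appear as paths
> in any multipath map.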
>
>
> Regards,
>
>
> On Wed, Jun 15, 2011 at 12:16 AM, Balaji S <skjbalaji at gmail.com> wrote:
>
> > Hi,
> > In my setup I have implemented 10 two-node clusters running MySQL as the
> > cluster service, with IPMI cards as the fencing devices.
> >
> > In /var/log/messages I keep getting errors like the ones below:
> >
> > Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdm, sector 0
> > Jun 14 12:50:48 hostname kernel: sd 3:0:2:2: Device not ready: <6>: Current: sense key: Not Ready
> > Jun 14 12:50:48 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required
> > Jun 14 12:50:48 hostname kernel:
> > Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdn, sector 0
> > Jun 14 12:50:48 hostname kernel: sd 3:0:2:4: Device not ready: <6>: Current: sense key: Not Ready
> > Jun 14 12:50:48 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required
> > Jun 14 12:50:48 hostname kernel:
> > Jun 14 12:50:48 hostname kernel: end_request: I/O error, dev sdp, sector 0
> > Jun 14 12:51:10 hostname kernel: sd 3:0:0:1: Device not ready: <6>: Current: sense key: Not Ready
> > Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required
> > Jun 14 12:51:10 hostname kernel:
> > Jun 14 12:51:10 hostname kernel: end_request: I/O error, dev sdc, sector 0
> > Jun 14 12:51:10 hostname kernel: printk: 3 messages suppressed.
> > Jun 14 12:51:10 hostname kernel: Buffer I/O error on device sdc, logical block 0
> > Jun 14 12:51:10 hostname kernel: sd 3:0:0:2: Device not ready: <6>: Current: sense key: Not Ready
> > Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required
> > Jun 14 12:51:10 hostname kernel:
> > Jun 14 12:51:10 hostname kernel: end_request: I/O error, dev sdd, sector 0
> > Jun 14 12:51:10 hostname kernel: Buffer I/O error on device sdd, logical block 0
> > Jun 14 12:51:10 hostname kernel: sd 3:0:0:4: Device not ready: <6>: Current: sense key: Not Ready
> > Jun 14 12:51:10 hostname kernel:     Add. Sense: Logical unit not ready, manual intervention required
> >
> >
> > When I check multipath -ll, all of these devices are on the passive path.
> >
> > Environment:
> >
> > RHEL 5.4 & EMC SAN
> >
> > Please suggest how to overcome this issue; any help would be highly
> > appreciated.
> > Thanks in advance.
> >
> >
> > --
> > Thanks,
> > BSK
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20110619/57c6e69e/attachment.html
> >
>
> ------------------------------
>
> Message: 2
> Date: Sun, 19 Jun 2011 22:12:35 +0530
> From: dOminic <share2dom at gmail.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] umount failing...
> Message-ID: <BANLkTimXC6NZLO=GN2ew6=y=7z=VCujsOQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Is SELinux in enforcing mode (worth checking audit.log)? If yes, try setting
> SELinux to permissive or disabled mode and check again.
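>
> A quick way to test, assuming auditd is running (these are standard SELinux
> tools; adjust to your setup):
>
> # getenforce
> # setenforce 0        (temporarily switch to permissive)
> # ausearch -m avc -ts recent
>
> If the umount starts working in permissive mode, the AVC denials that were
> blocking it should show up in the audit log.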
>
> Regards,
>
> On Thu, Jun 16, 2011 at 1:48 PM, Corey Kovacs <corey.kovacs at gmail.com> wrote:
>
> > My apologies for not getting back sooner. I am in the middle of a move.
> >
> > I cannot post my configs or logs (yeah, not helpful I know) but
> > suffice it to say I strongly believe they are correct (I know,
> > everyone says that). I've had other people look at them just to make sure
> > it wasn't a case of proofreading my own paper, etc., and it always comes
> > down to the umount failing. I have 6 other identical NFS services
> > (save for the mount point/export location) and they all work
> > flawlessly. That's why I am zeroing in on the use of '/home' as the
> > culprit.
> >
> > Anyway, it's not a lot to go on I know, but I am just looking for
> > directions to search for now.
> >
> > Thanks
> >
> > Corey
> >
> > On Mon, Jun 13, 2011 at 11:44 AM, Rajveer Singh
> > <torajveersingh at gmail.com> wrote:
> > >
> > >
> > > On Thu, Jun 9, 2011 at 8:22 PM, Corey Kovacs <corey.kovacs at gmail.com> wrote:
> > >>
> > >> Folks,
> > >>
> > >> I have a 5 node cluster serving out several NFS exports, one of which is /home.
> > >>
> > >> All of the nfs services can be moved from node to node without problem
> > >> except for the one providing /home.
> > >>
> > >> The logs on that node indicate the umount is failing and then the
> > >> service is disabled (self-fence is not enabled).
> > >>
> > >> Even after the service is put into a failed state and then disabled
> > >> manually, umount fails...
> > >>
> > >> I noticed recently, while playing with Conga, that creating a service
> > >> for /home on a test cluster produced a warning about reserved words,
> > >> and as I recall (I could be wrong) /home was among the illegal values
> > >> for the mount point.
> > >>
> > >> I have turned off everything I could think of that might be "holding"
> > >> the mount and have run various iterations of lsof, find, etc.; nothing
> > >> shows up as actively using the filesystem.
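> > >>
> > >> (Roughly: things like "lsof /home" and "fuser -vm /home", plus find runs
> > >> over the tree; none of them reports anything holding the filesystem.)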
> > >>
> > >> This particular file system is 1TB.
> > >>
> > >> Is there something wrong with using /home as an export?
> > >>
> > >> Some specifics.
> > >>
> > >> RHEL5.6 (updated as of last week)
> > >> HA-LVM protecting ext3 using the newer "preferred method" with clvmd
> > >> Ext3 for exported file systems
> > >> 5 nodes.
> > >>
> > >>
> > >> Any ideas would be greatly appreciated.
> > >>
> > >> -C
> > >>
> > > Can you share your log file and cluster.conf file?
> > > --
> > > Linux-cluster mailing list
> > > Linux-cluster at redhat.com
> > > https://www.redhat.com/mailman/listinfo/linux-cluster
> > >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20110619/42a94077/attachment.html
> >
>
> ------------------------------
>
> Message: 3
> Date: Sun, 19 Jun 2011 22:14:56 +0530
> From: dOminic <share2dom at gmail.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] Plugged out blade from bladecenter
>        chassis - fence_bladecenter failed
> Message-ID: <BANLkTindmcada5yXOAZKYTp_DB9oXtQtQg at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> There is a bug related to missing_as_off -
> https://bugzilla.redhat.com/show_bug.cgi?id=689851 - the fix is expected in
> RHEL 5.7.
>
> regards,
>
> On Wed, Apr 27, 2011 at 1:59 PM, Parvez Shaikh <parvez.h.shaikh at gmail.com> wrote:
>
> > Hi all,
> >
> > I am using RHCS on an IBM BladeCenter with BladeCenter fencing. I pulled a
> > blade out of its chassis slot and was expecting failover to occur. However,
> > when I did so, I got the following messages:
> >
> > fenced[10240]: agent "fence_bladecenter" reports: Failed: Unable to obtain correct plug status or plug is not available
> > fenced[10240]: fence "blade1" failed
> >
> > Is it supported that, if I pull a blade out of its slot, failover occurs
> > without manual intervention? If so, which fencing agent must I use?
> >
> > Thanks,
> > Parvez
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20110619/e1f7ba7a/attachment.html
> >
>
> ------------------------------
>
> Message: 4
> Date: Mon, 20 Jun 2011 10:46:41 +0530
> From: Parvez Shaikh <parvez.h.shaikh at gmail.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] Plugged out blade from bladecenter
>        chassis - fence_bladecenter failed
> Message-ID: <BANLkTikQbWhmVvXLFphx4q7tsxT1tFoLPQ at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi, thanks Dominic.
>
> Does fence_bladecenter always "reboot" the blade as part of fencing? I have
> seen it turn the blade off by default.
>
> Running fence_bladecenter --missing-as-off ... -o off returns the correct
> result from the command line, but fencing fails when it is run through
> "fenced". I am using RHEL 5.5 ES, and fence_bladecenter reports the following
> version:
>
> fence_bladecenter -V
> 2.0.115 (built Tue Dec 22 10:05:55 EST 2009)
> Copyright (C) Red Hat, Inc.  2004  All rights reserved.
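>
> (Speculating for the archives: I assume the CLI flag maps to a missing_as_off
> attribute on the fence device in cluster.conf, roughly like the line below.
> The attribute name and placement are only my guess, and the IP, login and
> password are placeholders:
>
> <fencedevice agent="fence_bladecenter" name="bc1" ipaddr="x.x.x.x"
>              login="USERID" passwd="PASSW0RD" missing_as_off="1"/>
>
> Even if that is right, it presumably will not help before the rhel5u7 fix
> mentioned in the bug above.)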
>
>
> Anyway, thanks for the Bugzilla reference.
>
> Regards
>
> On Sun, Jun 19, 2011 at 10:14 PM, dOminic <share2dom at gmail.com> wrote:
>
> > There is a bug related to missing_as_off -
> > https://bugzilla.redhat.com/show_bug.cgi?id=689851 - the fix is expected in
> > RHEL 5.7.
> >
> > regards,
> >
> > On Wed, Apr 27, 2011 at 1:59 PM, Parvez Shaikh <parvez.h.shaikh at gmail.com> wrote:
> >
> >> Hi all,
> >>
> >> I am using RHCS on an IBM BladeCenter with BladeCenter fencing. I pulled a
> >> blade out of its chassis slot and was expecting failover to occur. However,
> >> when I did so, I got the following messages:
> >>
> >> fenced[10240]: agent "fence_bladecenter" reports: Failed: Unable to obtain correct plug status or plug is not available
> >> fenced[10240]: fence "blade1" failed
> >>
> >> Is it supported that, if I pull a blade out of its slot, failover occurs
> >> without manual intervention? If so, which fencing agent must I use?
> >>
> >> Thanks,
> >> Parvez
> >>
> >> --
> >> Linux-cluster mailing list
> >> Linux-cluster at redhat.com
> >> https://www.redhat.com/mailman/listinfo/linux-cluster
> >>
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> https://www.redhat.com/archives/linux-cluster/attachments/20110620/0386831e/attachment.html
> >
>
> ------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> End of Linux-cluster Digest, Vol 86, Issue 18
> *********************************************
>



-- 
Thanks,
Balaji S