[Linux-cluster] EXT3 or GFS shared disk

brem belguebli brem.belguebli at gmail.com
Thu Sep 10 22:30:28 UTC 2009


Issue a cman_tool status on both nodes, as well as group_tool, and post the
outputs.
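
For example, on each node:

    cman_tool status
    group_tool

cman_tool status should show the member count, quorum state and flags, and
group_tool the fence, dlm and gfs groups with their states.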


2009/9/11 James Marcinek <jmarc1 at jemconsult.biz>

> When I try to issue lvdisplay commands on the 2nd node (the problem child),
> they hang... There was only one clvmd process running when I ps'd it... I may
> just rebuild the thing unless someone else has another option; rebooting
> hasn't seemed to fix it. What log files would you recommend I examine?
>
> Thanks,
>
> james
>
>
> ----- Original Message -----
> From: "brem belguebli" <brem.belguebli at gmail.com>
> To: "linux clustering" <linux-cluster at redhat.com>
> Sent: Thursday, September 10, 2009 5:31:49 PM GMT -05:00 US/Canada Eastern
> Subject: Re: [Linux-cluster] EXT3 or GFS shared disk
>
>
> The Dirty flag is not pointing to any error; it's a normal status (one that
> is going to be renamed in a future release, as it has worried many people).
>
>
> The message you get means that your second node cannot contact clvmd and so
> cannot access clustered VGs.
>
>
> Issue a ps -ef | grep clvmd to see if it is running (if you find two clvmd
> processes running, it means that things went bad!).
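>
> For example:
>
>     ps -ef | grep clvmd
>
> A single clvmd process is what you want to see; two of them usually means a
> failed start left a stale daemon behind.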
>
>
> The solution is to restart your second node (reboot, start cman, etc.) and
> check whether it comes up the right way.
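>
> For example, after the reboot, something like:
>
>     service cman start
>     service clvmd start
>     service rgmanager start
>
> in that order: clvmd needs cman up first, and rgmanager should come last.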
>
>
> Brem
>
>
> 2009/9/10 James Marcinek <jmarc1 at jemconsult.biz>
>
>
> No, I didn't. That might be the root cause.
>
> I was able to get rid of it, but now I get these errors on the second
> cluster node:
>
> connect() failed on local socket: Connection refused
> WARNING: Falling back to local file-based locking.
> Volume Groups with the clustered attribute will be inaccessible
>
> When I do a cman_tool status, I see that the Flags line indicates: 2node
> Dirty.
>
> Is there a way to clean this up?
>
> I'm still trying to get this built, so there's no data on anything.
>
> Thanks,
>
> James
>
>
> ----- Original Message -----
> From: "Luis Cerezo" < Luis.Cerezo at pgs.com >
> To: "linux clustering" < linux-cluster at redhat.com >
>
>
>
> Sent: Thursday, September 10, 2009 4:59:16 PM GMT -05:00 US/Canada Eastern
> Subject: Re: [Linux-cluster] EXT3 or GFS shared disk
>
> This is shared storage, correct?
> Have you tried the pvscan/vgscan/lvscan dance?
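>
> That is, something like:
>
>     pvscan
>     vgscan
>     lvscan
>
> on the node that is not seeing the volumes.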
>
> Did you create the VG with -c y?
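>
> For example (myvg and /dev/sdb1 are placeholders):
>
>     vgcreate -c y myvg /dev/sdb1
>
> An existing VG can be marked clustered with vgchange -c y myvg.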
>
> -luis
>
>
> Luis E. Cerezo
> Global IT
> GV: +1 412 223 7396
>
> On Sep 10, 2009, at 3:47 PM, James Marcinek wrote:
>
> > It turns out that after my initial issues I turned off clvmd on both
> > nodes. One of them comes up fine but the other hangs... I'm going to
> > boot into runlevel 1 and check my LVM setup; this might be (I'm hoping)
> > the root cause of why they're not becoming members (one sees the
> > phantom LVM and the other does not).
> >
> >
> > ----- Original Message -----
> > From: "Luis Cerezo" < Luis.Cerezo at pgs.com >
> > To: "linux clustering" < linux-cluster at redhat.com >
> > Sent: Thursday, September 10, 2009 3:58:35 PM GMT -05:00 US/Canada
> > Eastern
> > Subject: Re: [Linux-cluster] EXT3 or GFS shared disk
> >
> > You really have to get the cluster in quorum before LVM will work nicely.
> >
> > What is the output of clustat?
> >
> > Do you have clvmd up and running on both nodes?
> >
> > Did you run pvscan/vgscan/lvscan after initializing the volume?
> >
> > What did vgdisplay say? Was it set to 'NOT available', etc.?
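> >
> > For example (myvg is a placeholder):
> >
> >     vgdisplay myvg
> >     lvdisplay myvg
> >
> > and check the Status lines; 'NOT available' means the LV has not been
> > activated, which vgchange -a y myvg would normally fix.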
> >
> > -luis
> >
> > Luis E. Cerezo
> > Global IT
> > GV: +1 412 223 7396
> >
> > On Sep 10, 2009, at 2:38 PM, James Marcinek wrote:
> >
> >> I'm running 5.3. It gave me a locking issue and indicated that it
> >> couldn't create the logical volume. However, the volume showed up
> >> anyway, and I had some issues getting rid of it.
> >>
> >> I couldn't remove the LV because it couldn't locate the ID. In the
> >> end I rebooted the node, and then I could get rid of it...
> >>
> >> I would prefer to use logical volumes if possible. The packages are
> >> there, though one of the nodes did have an issue with clvmd not
> >> starting...
> >>
> >> I'm working on rebuilding the cluster.conf. Each node kept coming up
> >> as in the config but 'not a member'. When I went into system-config-
> >> cluster on one node, it showed up as a member but the other did not.
> >> I went to the other node and saw the same thing: that node was a
> >> member but the other was not.
> >>
> >> Right now, I've totally scrapped the original cluster.conf. I
> >> created a new one and copied it to the second node, then started
> >> cman and rgmanager, and it's the same thing: both nodes are in the
> >> config, but only one is a member in the cluster management tab...
> >>
> >> What's going on?
> >>
> >> ----- Original Message -----
> >> From: "Luis Cerezo" < Luis.Cerezo at pgs.com >
> >> To: "linux clustering" < linux-cluster at redhat.com >
> >> Sent: Thursday, September 10, 2009 3:14:00 PM GMT -05:00 US/Canada
> >> Eastern
> >> Subject: Re: [Linux-cluster] EXT3 or GFS shared disk
> >>
> >> What grief did it give you? Also, what version of RHEL are you
> >> running?
> >>
> >> 5.1 has some known issues with clvmd.
> >>
> >> -luis
> >>
> >> Luis E. Cerezo
> >> Global IT
> >> GV: +1 412 223 7396
> >>
> >> On Sep 10, 2009, at 12:37 PM, James Marcinek wrote:
> >>
> >>> Hello again,
> >>>
> >>> Next question.
> >>>
> >>> Again, GFS wasn't around back when I took my cluster class (in '04),
> >>> so I'm not sure whether or not I should use it in this cluster
> >>> build...
> >>>
> >>> If I have an active/passive cluster where only one node needs
> >>> access to the file system at a given time, should I just use an
> >>> ext3 partition, or should I use GFS on a logical volume?
> >>>
> >>> I just tried to create a shared LVM logical volume with an ext3
> >>> partition on it (I had already done an lvmconf --enable-cluster),
> >>> but it caused me some grief, and I switched to a plain partition
> >>> after cleaning up...
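> >>>
> >>> (For reference, that command sets cluster locking in /etc/lvm/lvm.conf:
> >>>
> >>>     lvmconf --enable-cluster    # sets locking_type = 3, clvmd locking
> >>>
> >>> so LVM operations go through clvmd from then on.)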
> >>>
> >>> Thanks,
> >>>
> >>> James
> >>>

