[Linux-cluster] Redhat without qdisk

emmanuel segura emi2fast at gmail.com
Thu Apr 26 09:37:44 UTC 2012


Hello GouNiNi

Why don't you try to track down the problem with your storage?

I think that's the best thing you can do.

On 26 April 2012 12:12, GouNiNi <gounini.geekarea at gmail.com> wrote:

> Hello Emmanuel,
>
> My idea is different: a component is available and can give me some
> possibilities, but I don't need those features, so why should I add
> something that can cause problems ;) ?
>
> In my case, the problem was also that my storage access was unstable,
> so qdisk was very annoying.
>
> Regards
>
> --
>  .`'`.   GouNiNi
>  :  ': :
>  `. ` .`  GNU/Linux
>   `'`    http://www.geekarea.fr
>
>
> ----- Original Message -----
> > From: "emmanuel segura" <emi2fast at gmail.com>
> > To: "linux clustering" <linux-cluster at redhat.com>
> > Sent: Wednesday, 25 April 2012 18:05:09
> > Subject: Re: [Linux-cluster] Redhat without qdisk
> >
> >
> >
> > Hello GouNiNi
> >
> > I use clusters with qdisk and master_wins=1, without any heuristic
> > parameter, and I can tell you I have never had a problem.
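> >
> > For reference, the quorumd line I mean looks roughly like this in
> > cluster.conf (the label and timings here are only illustrative, not
> > my production values):
> >
> >   <quorumd label="qdisk" interval="1" tko="10" master_wins="1"/>
> >
> > With master_wins="1" only the qdisk master advertises the qdisk
> > vote, so no heuristic is needed to break a tie between two nodes.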
> >
> > A simple question: if I have a problem with a component in a
> > cluster, what can I do?
> >
> > 1: Remove the component
> > 2: Look for a solution
> >
> > Sorry, but I prefer the second choice :-)
> >
> >
> >
> > On 25 April 2012 15:48, GouNiNi <gounini.geekarea at gmail.com>
> > wrote:
> >
> >
> > Hello,
> >
> > I manage many clusters, and I had a lot of problems as long as I
> > used a quorum device. Since I removed it, my clusters have been
> > stable. I agree with Lon Hohberger's and Ryan O'Hara's point of
> > view.
> >
> > You have to ask yourself: do I need a specific check (heuristic),
> > or do I need a scenario like "all but one"? If your answers are
> > "no", please don't use a quorum device.
> >
> > The only problem you will have to deal with is when network
> > communication between two nodes is cut. Both then think they have
> > quorum and try to fence the other (a split-brain situation). To
> > prevent this, recent cluster v2 releases provide a redundant ring
> > (not officially supported on RHEL5), and RHEL6 has a similar
> > feature, I think.
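> >
> > For reference, the redundant ring setup looks roughly like this in
> > a cman cluster.conf (node names here are made up):
> >
> >   <totem rrp_mode="passive"/>
> >   <clusternodes>
> >     <clusternode name="node1" nodeid="1">
> >       <altname name="node1-ring2"/>
> >     </clusternode>
> >     <clusternode name="node2" nodeid="2">
> >       <altname name="node2-ring2"/>
> >     </clusternode>
> >   </clusternodes>
> >
> > Each <altname> is the node's hostname on the second network, so the
> > cluster heartbeat survives the loss of either ring.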
> >
> > Regards,
> >
> > --
> > .`'`.   GouNiNi
> > :  ': :
> > `. ` .`  GNU/Linux
> >  `'`     http://www.geekarea.fr
> >
> >
> > ----- Original Message -----
> > > From: "Gunther Schlegel" <schlegel at riege.com>
> > > To: linux-cluster at redhat.com
> > > Sent: Monday, 16 April 2012 17:15:52
> > > Subject: Re: [Linux-cluster] Redhat without qdisk
> >
> >
> > >
> > > Hi,
> > >
> > > > But if you have a problem with your storage, it's normal that
> > > > the node gets fenced, because your cluster services depend on
> > > > storage
> > >
> > > Well, no, my clustered services do not depend on SAN storage.
> > >
> > > > Or maybe you would like to have a cluster running without a
> > > > SAN disk
> > >
> > > I need the qdisk for two reasons:
> > >
> > > - heuristics
> > > - to safely achieve quorum in a two-node cluster if only one
> > >   node is up.
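> > >
> > > The kind of configuration I mean, sketched with illustrative
> > > timings and a made-up gateway address:
> > >
> > >   <cman expected_votes="3"/>
> > >   <quorumd label="qdisk" interval="1" tko="10" votes="1">
> > >     <heuristic program="ping -c1 -w1 192.168.1.254"
> > >                score="1" interval="2" tko="3"/>
> > >   </quorumd>
> > >
> > > With expected_votes="3", one node plus the qdisk vote is enough
> > > for quorum, and the ping heuristic decides which node keeps that
> > > vote.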
> > >
> > > regards, Gunther
> > >
> > > > On 13 April 2012 11:20, Gunther Schlegel
> > > > <schlegel at riege.com> wrote:
> > > >
> > > > Hi Lon,
> > > >
> > > > > > Why did Red Hat make qdisk a tie-breaker, while some
> > > > > > people from support say it's optional and sometimes say it
> > > > > > is not needed?
> > > > >
> > > > > It is optional and is often not needed. It was really
> > > > > developed for two purposes:
> > > > >
> > > > > - to help resolve fencing races (which can be resolved
> > > > > using delays or other tactics; see the fragment below)
> > > > >
> > > > > - to allow 'last-man-standing' in >2-node clusters.
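> > > > >
> > > > > (The delay tactic is just an attribute on the fence device
> > > > > line in cluster.conf, assuming the agent supports the delay
> > > > > option; the name and value here are only an example:
> > > > >
> > > > >   <device name="ipmi-node1" delay="15"/>
> > > > >
> > > > > inside node1's <fence> block means anyone fencing node1
> > > > > waits 15 seconds first, so node1 effectively wins the race.)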
> > > > >
> > > > > With qdiskd you can go from 4 nodes to 1 (given properly
> > > > > configured heuristics). The other 3 nodes then, because
> > > > > heuristics fail, can't "gang up" (by forming a quorum) on
> > > > > the surviving node and take over - this means your critical
> > > > > service stays running and available. The problem is that, in
> > > > > practice, the "last node" is rarely able to handle the
> > > > > workload.
> > > > >
> > > > > This behavior is obviated by features in corosync 2.0,
> > > > > which gives administrators the ability to state that a -new-
> > > > > quorum can only form if all members are present (but joining
> > > > > an existing quorum is always allowed).
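> > > > >
> > > > > In corosync.conf terms that is roughly the following (the
> > > > > vote count is only an example):
> > > > >
> > > > >   quorum {
> > > > >       provider: corosync_votequorum
> > > > >       expected_votes: 4
> > > > >       wait_for_all: 1
> > > > >   }
> > > > >
> > > > > wait_for_all blocks a fresh quorum until all nodes have been
> > > > > seen at least once; there is also last_man_standing for the
> > > > > shrinking case.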
> > > >
> > > >
> > > > Is this in RHEL6? I am still trying to solve the following
> > > > situation:
> > > >
> > > > - 2 node cluster without need for shared storage (no gfs)
> > > > - qdiskd in place because of the heuristics.
> > > > - Cluster is fine if both nodes have network communication and
> > > > heuristics reach the minimum score.
> > > >
> > > > Problem: if the shared storage the qdisk resides on becomes
> > > > unavailable (but everything else is fine), a node will be
> > > > fenced. It actually happens when the shared storage comes back
> > > > online: the node that re-establishes the storage link first
> > > > wins and fences the other one. I try to mitigate that with
> > > > very long timeout settings, but a genuinely necessary eviction
> > > > is then delayed as well.
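> > > >
> > > > To give an idea of the trade-off, the knobs in question (the
> > > > values here are only an example, not my real settings):
> > > >
> > > >   <quorumd label="qdisk" interval="3" tko="20" votes="1"/>
> > > >
> > > > qdiskd only declares the disk (or a node) dead after
> > > > interval x tko seconds of missed updates - 60 seconds here -
> > > > which rides out storage blips, but a node that really needs to
> > > > be evicted is kept around just as long.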
> > > >
> > > > I would really appreciate it if qdiskd would withdraw its
> > > > quorum vote and not do any fencing at all. The cluster would
> > > > survive, as quorum is also maintained while the cluster
> > > > network connection is up.
> > > >
> > > > best regards, Gunther
> > > >
> > > >
> > > > Gunther Schlegel
> > > > Head of IT Infrastructure
> > > >
> > > >
> > > > --
> > > >
> > > >
> > > > .............................................................
> > > > Riege Software International GmbH Phone: +49 2159 91480
> > > > Mollsfeld 10 Fax: +49 2159 914811
> > > > 40670 Meerbusch Web: www.riege.com
> > > > Germany E-Mail: schlegel at riege.com
> > > > -- --
> > > > Commercial Register: Managing Directors:
> > > > Amtsgericht Neuss HRB-NR 4207 Christian Riege
> > > > VAT Reg No.: DE120585842 Gabriele Riege
> > > > Johannes Riege
> > > > Tobias Riege
> > > > .............................................................
> > > > YOU CARE FOR FREIGHT, WE CARE FOR YOU
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Linux-cluster mailing list
> > > > Linux-cluster at redhat.com
> > > > https://www.redhat.com/mailman/listinfo/linux-cluster
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > this is my life and I live it as long as God wills
> > > >
> > > >
> > > > --
> > > > Linux-cluster mailing list
> > > > Linux-cluster at redhat.com
> > > > https://www.redhat.com/mailman/listinfo/linux-cluster
> > >
> > > --
> > > Gunther Schlegel
> > > Head of IT Infrastructure
> > >
> > >
> > >
> > >
> > > .............................................................
> > > Riege Software International GmbH Phone: +49 2159 91480
> > > Mollsfeld 10 Fax: +49 2159 914811
> > > 40670 Meerbusch Web: www.riege.com
> > > Germany E-Mail: schlegel at riege.com
> > > -- --
> > > Commercial Register: Managing Directors:
> > > Amtsgericht Neuss HRB-NR 4207 Christian Riege
> > > VAT Reg No.: DE120585842 Gabriele Riege
> > > Johannes Riege
> > > Tobias Riege
> > > .............................................................
> > > YOU CARE FOR FREIGHT, WE CARE FOR YOU
> > >
> > >
> > >
> > >
> > > --
> > > Linux-cluster mailing list
> > > Linux-cluster at redhat.com
> > > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
> >
> > --
> > this is my life and I live it as long as God wills
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



-- 
this is my life and I live it as long as God wills