[Linux-cluster] Restarting GFS2 without reboot
Vladimir Melnik
v.melnik at uplink.ua
Tue Nov 26 12:41:25 UTC 2013
On Tue, Nov 26, 2013 at 12:17:14PM +0000, Steven Whitehouse wrote:
> > Okay, one node has gone down, but why can't the second node keep working
> > with the filesystem? :-( That's what surprises and scares me at the same time.
> Well, the second node will need to ensure quorum, so you should have a
> two-node setup configured. That will require some kind of tie-breaker,
> so I'm guessing that you are using qdisk for that? This is why it would
> help if you posted your config; otherwise I'm left guessing.
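The quorum arithmetic behind that remark can be sketched as follows (a minimal illustration of the majority rule, not cman's actual code; the `quorum` helper is hypothetical):

```python
# Hypothetical helper, not cman's implementation: quorum is a
# strict majority of the total configured votes.
def quorum(total_votes):
    return total_votes // 2 + 1

# Two nodes with one vote each: quorum is 2, so when either node
# dies the survivor's single vote is below quorum and GFS2 activity
# stalls until quorum returns -- hence the need for a tie-breaker
# (e.g. qdisk) or cman's special two-node mode.
print(quorum(2))  # 2
print(quorum(3))  # 2: a third vote lets one node fail safely
```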
I'm using corosync instead of qdiskd. And here's my cluster.conf; it
looks really simple:
<?xml version="1.0"?>
<cluster name="ckvm1_pod1" config_version="5">
  <clusternodes>
    <clusternode name="***.host1.ckvm1.***" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="host1_ipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="***.host2.ckvm1.***" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="host2_ipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="host1_ipmi" agent="fence_ipmilan" ipaddr="***" login="***" passwd="***"/>
    <fencedevice name="host2_ipmi" agent="fence_ipmilan" ipaddr="***" login="***" passwd="***"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
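For what it's worth, a two-node cman cluster without qdisk usually relies on cman's two-node mode, enabled with a line like the following inside `<cluster>` (a sketch only, not present in the config above; verify against your cman documentation and bump `config_version` before applying):

```xml
<!-- Lets the cluster stay quorate with a single surviving node;
     two_node="1" and expected_votes="1" must be used together. -->
<cman two_node="1" expected_votes="1"/>
```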
--
V.Melnik