<p dir="ltr">Ok but is not possible to ignore fence? </p>
<div class="gmail_quote">Il 01/feb/2014 22:09 "Digimer" <<a href="mailto:lists@alteeve.ca">lists@alteeve.ca</a>> ha scritto:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Ooooh, I'm not sure what option you have then. I suppose fence_virtd/fence_xvm is your best option, but you're going to need to have the admin configure the fence_virtd side.<br>
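<br>
For reference, here is a rough sketch of the host-side /etc/fence_virt.conf the admin would need to set up; the multicast address, port, and key path below are the usual defaults rather than values from this thread, and the bridge name "br0" is only a placeholder:<br>
<br>
    fence_virtd {<br>
        listener = "multicast";<br>
        backend = "libvirt";<br>
    }<br>
    listeners {<br>
        multicast {<br>
            interface = "br0";<br>
            address = "225.0.0.12";<br>
            port = "1229";<br>
            key_file = "/etc/cluster/fence_xvm.key";<br>
        }<br>
    }<br>
    backends {<br>
        libvirt {<br>
            uri = "qemu:///system";<br>
        }<br>
    }<br>
<br>
The same key file then has to be copied to /etc/cluster/ inside each guest so fence_xvm can authenticate against fence_virtd.<br>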
<br>
On 01/02/14 03:50 PM, nik600 wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
My problem is that I don't have root access at the host level.<br>
<br>
On 1 Feb 2014 at 19:49, "Digimer" <<a href="mailto:lists@alteeve.ca" target="_blank">lists@alteeve.ca</a>> wrote:<br>
<br>
    On 01/02/14 01:35 PM, nik600 wrote:<br>
<br>
        Dear all<br>
<br>
        I need some clarification about clustering with RHEL 6.4.<br>
<br>
        I have a two-node cluster in an active/passive configuration; I<br>
        simply want a virtual IP that migrates between the two nodes.<br>
<br>
        I've noticed that if I reboot or manually shut down a node, the<br>
        failover works correctly, but if I power off one node the cluster<br>
        doesn't fail over to the other node.<br>
<br>
        Another strange situation: if I power off all the nodes and then<br>
        switch on only one, the cluster doesn't start on the active node.<br>
<br>
        I've read the manual and the documentation at<br>
<br>
        <a href="https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html" target="_blank">https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html</a><br>
<br>
        and I understand that the problem is related to fencing. The<br>
        trouble is that my two nodes are virtual machines: I can't control<br>
        the hardware and can't issue any custom command on the host side.<br>
<br>
        I've tried to use fence_xvm, but I'm not sure about it: if my VM<br>
        is powered off, how can it reply to fence_xvm messages?<br>
<br>
        Here are my logs from when I power off the VM:<br>
<br>
        ==> /var/log/cluster/fenced.log <==<br>
        Feb 01 18:50:22 fenced fencing node mynode02<br>
        Feb 01 18:50:53 fenced fence mynode02 dev 0.0 agent fence_xvm result: error from agent<br>
        Feb 01 18:50:53 fenced fence mynode02 failed<br>
<br>
        I've tried to force the manual fence with:<br>
<br>
        fence_ack_manual mynode02<br>
<br>
        and in this case the failover works properly.<br>
<br>
        The point is: since I'm not using any shared filesystem, only<br>
        running Apache behind a virtual IP, I won't have any split-brain<br>
        scenario, so I don't need fencing. Or do I?<br>
<br>
        So, is it possible to have a simple "dummy" fence agent?<br>
<br>
        Here is my config.xml:<br>
<br>
        <?xml version="1.0"?><br>
        <cluster config_version="20" name="hacluster"><br>
                  <fence_daemon clean_start="0" post_fail_delay="0"<br>
        post_join_delay="0"/><br>
                  <cman expected_votes="1" two_node="1"/><br>
                  <clusternodes><br>
                          <clusternode name="mynode01" nodeid="1" votes="1"><br>
                                  <fence><br>
                                          <method name="mynode01"><br>
                                                  <device domain="mynode01"<br>
        name="mynode01"/><br>
                                          </method><br>
                                  </fence><br>
                          </clusternode><br>
                          <clusternode name="mynode02" nodeid="2" votes="1"><br>
                                  <fence><br>
                                          <method name="mynode02"><br>
                                                  <device domain="mynode02"<br>
        name="mynode02"/><br>
                                          </method><br>
                                  </fence><br>
                          </clusternode><br>
                  </clusternodes><br>
                  <fencedevices><br>
                          <fencedevice agent="fence_xvm" name="mynode01"/><br>
                          <fencedevice agent="fence_xvm" name="mynode02"/><br>
                  </fencedevices><br>
                  <rm log_level="7"><br>
                          <failoverdomains><br>
                                  <failoverdomain name="MYSERVICE"<br>
        nofailback="0"<br>
        ordered="0" restricted="0"><br>
                                          <failoverdomainnode<br>
        name="mynode01"<br>
        priority="1"/><br>
                                          <failoverdomainnode<br>
        name="mynode02"<br>
        priority="2"/><br>
                                  </failoverdomain><br>
                          </failoverdomains><br>
                          <resources/><br>
                          <service autostart="1" exclusive="0"<br>
        name="MYSERVICE"<br>
        recovery="relocate"><br>
                                  <ip address="192.168.1.239"<br>
        monitor_link="on"<br>
        sleeptime="2"/><br>
        <apache config_file="conf/httpd.conf" name="apache"<br>
        server_root="/etc/httpd" shutdown_wait="0"/><br>
                          </service><br>
                  </rm><br>
        </cluster><br>
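<br>
        For completeness, the usual way to validate and then propagate a<br>
        cluster.conf change on RHEL 6, after bumping config_version, is:<br>
<br>
        ccs_config_validate<br>
        cman_tool version -r<br>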
<br>
        Thanks to all in advance.<br>
<br>
<br>
    The fence_virtd/fence_xvm agent works by using multicast to talk to<br>
    the VM host. So the "off" confirmation comes from the hypervisor,<br>
    not the target.<br>
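<br>
    A quick way to check whether fence_virtd on the host is reachable at<br>
    all (this assumes the shared key is already in place at<br>
    /etc/cluster/fence_xvm.key inside the guest) is:<br>
<br>
    fence_xvm -o list<br>
<br>
    If the host side is working, that should print the list of domains<br>
    fence_virtd knows about.<br>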
<br>
    Depending on your setup, you might have better luck with fence_virsh<br>
    (I have to use it myself, as there is a known multicast issue on<br>
    Fedora hosts). Can you check, as a test if nothing else, whether<br>
    'fence_virsh' will work for you?<br>
<br>
    fence_virsh -a <host ip> -l root -p <host root pw> -n <virsh name for target vm> -o status<br>
<br>
    If this works, it should be trivial to add to cluster.conf, and you<br>
    will have a working fence method. However, I would still recommend<br>
    switching back to fence_xvm if you can: the fence_virsh agent depends<br>
    on libvirtd running, which some consider a risk.<br>
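<br>
    For illustration, a minimal sketch of what the fence_virsh entries<br>
    might look like in cluster.conf; the host IP and password here are<br>
    placeholders, and "port" is the virsh domain name of the target VM:<br>
<br>
        <fencedevices><br>
            <fencedevice agent="fence_virsh" name="virsh_fence" ipaddr="192.168.1.1" login="root" passwd="host_root_pw"/><br>
        </fencedevices><br>
<br>
    and, inside each clusternode's <method> block:<br>
<br>
        <device name="virsh_fence" port="mynode01"/><br>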
<br>
    hth<br>
<br>
    --<br>
    Digimer<br>
    Papers and Projects: <a href="https://alteeve.ca/w/" target="_blank">https://alteeve.ca/w/</a><br>
    What if the cure for cancer is trapped in the mind of a person<br>
    without access to education?<br>
<br>
</blockquote>
<br>
<br>
-- <br>
Digimer<br>
Papers and Projects: <a href="https://alteeve.ca/w/" target="_blank">https://alteeve.ca/w/</a><br>
What if the cure for cancer is trapped in the mind of a person without access to education?<br>
<br>
-- <br>
Linux-cluster mailing list<br>
<a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a><br>
<a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/<u></u>mailman/listinfo/linux-cluster</a><br>
</blockquote></div>