[Linux-cluster] XEN VM Cluster

Marcos Ferreira da Silva marcos at digitaltecnologia.info
Wed Feb 6 19:17:43 UTC 2008


I solved the problem mounting the GFS filesystem.

I had forgotten to run "fence_ack_manual -n vserver1.teste.br".

Now I can mount the GFS partition.
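
For reference, a minimal sketch of the manual-fence sequence (assuming the
standard RHCS tools; node, device and mount point are the ones from my
config below):

  # check membership and the fence/DLM/GFS group state
  cman_tool nodes
  group_tool ls
  # only after confirming the fenced node was really reset,
  # acknowledge the manual fence so the fence domain can proceed
  fence_ack_manual -n vserver1.teste.br
  # the blocked GFS mount then goes through
  mount -t gfs /dev/VGADMIN/LVAdmin /storage/admin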

Now I have another problem.

Each node shows the other node as offline.

When I use the command line to live-migrate the VM to node 2, it works.
Migrating from node 2 back to node 1 works as well.
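
What I mean by migrating from the command line, as a sketch (assuming the
usual Xen and rgmanager tools; names taken from the clustat output below):

  # plain Xen live migration of the guest
  xm migrate --live admin vserver2.uniube.br
  # or let rgmanager handle it as a vm service
  clusvcadm -M vm:admin -m vserver2.uniube.br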

This is what appears in the messages file:

Feb  6 17:16:21 vserver1 fenced[5362]: fencing node "vserver2.uniube.br"
Feb  6 17:16:21 vserver1 fenced[5362]: fence "vserver2.uniube.br" failed
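
A minimal way to test the xvm fencing path by hand (assuming fence_xvmd is
running on both dom0s and /etc/cluster/fence_xvm.key is identical on them;
"aluno" is the domain tied to vserver2 in my cluster.conf below):

  # from the node that is trying to fence, ask the peer's fence_xvmd
  # to reboot the guest named in cluster.conf
  fence_xvm -H aluno
  # if this times out, check that the key files match and that
  # multicast between the hosts is not being blocked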


VSERVER1

[root at vserver1 ~]# clustat
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  vserver1.uniube.br                    1 Online, Local, rgmanager
  vserver2.uniube.br                    2 Offline

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  vm:admin             vserver1.uniube.br             started
  vm:aluno             (none)                         disabled
  service:testeAdmin   (none)                         stopped
  service:testeAluno   (none)                         stopped


VSERVER2

[root at vserver2 ~]# clustat
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  vserver1.uniube.br                    1 Offline
  vserver2.uniube.br                    2 Online, Local, rgmanager

  Service Name         Owner (Last)                   State
  ------- ----         ----- ------                   -----
  vm:admin             (none)                         stopped
  vm:aluno             (none)                         disabled
  service:testeAdmin   vserver2.uniube.br             started
  service:testeAluno   vserver2.uniube.br             started
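
Since each node lists the other as Offline while both are actually up, the
membership layer apparently does not see the peer. A sketch of the checks I
would run on both nodes (standard cman tools, nothing specific to this setup):

  # do the nodes agree on membership, votes and quorum?
  cman_tool status
  cman_tool nodes
  # cman/openais relies on multicast between the nodes,
  # so basic connectivity should at least work
  ping vserver2.uniube.br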



My cluster.conf:

<?xml version="1.0"?>
 <cluster alias="cluster1" config_version="24" name="cluster1">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <fence_xvmd family="ipv4" key_file="/etc/cluster/fence_xvm.key"/>
  <clusternodes>
    <clusternode name="vserver1.teste.br" nodeid="1" votes="1">
       <fence>
         <method name="1">
           <device domain="admin" name="admin"/>
         </method>
       </fence>
    </clusternode>
    <clusternode name="vserver2.teste.br" nodeid="2" votes="1">
       <fence>
         <method name="1">
           <device domain="aluno" name="aluno"/>
         </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
     <fencedevice agent="fence_xvm" name="admin"/>
     <fencedevice agent="fence_manual" name="manual"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="fodAdmin" ordered="1" restricted="1">
        <failoverdomainnode name="vserver1.teste.br" priority="2"/>
        <failoverdomainnode name="vserver2.teste.br" priority="1"/>
      </failoverdomain>
      <failoverdomain name="fodAluno" ordered="1" restricted="1">
        <failoverdomainnode name="vserver1.teste.br" priority="2"/>
        <failoverdomainnode name="vserver2.teste.br" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <clusterfs device="/dev/VGADMIN/LVAdmin" force_unmount="1" fsid="63078"
                 fstype="gfs" mountpoint="/storage/admin" name="gfsadmin" options=""/>
      <clusterfs device="/dev/VGALUNO/LVAluno" force_unmount="1" fsid="63078"
                 fstype="gfs" mountpoint="/storage/aluno" name="gfsaluno" options=""/>
    </resources>
    <vm autostart="1" domain="fodAdmin" exclusive="0" name="admin" path="/etc/xen"
        recovery="relocate"/>
    <vm autostart="0" domain="fodAluno" exclusive="0" name="aluno" path="/etc/xen"
        recovery="relocate"/>
    <service autostart="1" domain="gfsadmin" name="testeAdmin">
      <clusterfs ref="gfsadmin"/>
    </service>
    <service autostart="1" domain="gfsaluno" name="testeAluno">
      <clusterfs ref="gfsaluno"/>
    </service>
   </rm>
</cluster>
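
For completeness, a sketch of how a change to this file would be propagated
(assuming the standard RHEL 5 CCS tools; the new version just has to be
higher than the current config_version="24"):

  # after editing /etc/cluster/cluster.conf and bumping config_version to 25
  ccs_tool update /etc/cluster/cluster.conf
  cman_tool version -r 25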



On Wed, 2008-02-06 at 12:07 +0100, jr wrote:
> > How can I configure a partition to share the VM configs between two machines?
> > Could you send me your cluster.conf so I can compare it with what I want to do?
> 
> No need for my cluster.conf. Just use a GFS partition and it will be
> fine (don't forget to put it into fstab).
> 
> > 
> > Then I need a shared partition to hold the VM configs, which will be accessed
> > by the other machines, and a physical device (an LV on the storage) to hold
> > the real machine.
> > Is that correct?
> 
> I don't know what you mean by "real machine", but your guests not only
> need the config, they also need some storage for their system.
> That's where you need storage that's connected to your nodes; whether
> it's LUNs, LVM LVs or image files doesn't matter. Just keep in mind that if
> you are using image files, you need to place them on GFS so that every
> node in your cluster can access them in the same way.
> 
> > 
> > When I start a VM on node 1, it will start on a physical device.
> > If I disconnect node 1, will the VM migrate to node 2?
> > Will the client connections be lost?
> 
> It's just failover, which means that if the cluster sees a problem with
> one of the nodes, the other node will take over its services, which
> basically means that the VMs will be started on the other node.
> That does mean that your clients will get disconnected.
> 
> > 
> > I'm using an HP storage array and two servers with multipath over Emulex fibre channel.
> 
> Should be fine.
> 
> johannes
> 
-- 
_____________________________
Marcos Ferreira da Silva
DiGital Tecnologia
Uberlândia - MG
(34) 9154-0150 / 3226-2534





