[Linux-cluster] LVM2 over GNBD import?

Ryan Thomson thomsonr at ucalgary.ca
Thu Dec 9 03:09:47 UTC 2004


And it turns out to be something stupid. Apparently you need to run 'pvscan'
after importing the GNBD device. To think of the time I wasted on this... at
least I learned something :)
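
In case anyone else hits this, the sequence that got the PV showing up on the
importing node was basically the following (from memory, so double-check the
gnbd_import arguments against the man page):

[root@wolverine ~]# gnbd_import -i pool-serv1
[root@wolverine ~]# pvscan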

--
Ryan

Ryan Thomson (thomsonr at ucalgary.ca) wrote:
>
> More information:
>
> When I use 'dd' to zero out the partition table I get this error afterwards
> when trying to use pvcreate on the GNBD import:
>
> [root@wolverine ~]# pvcreate /dev/gnbd/pool1
>   Failed to wipe new metadata area
>   /dev/gnbd/pool1: Format-specific setup of physical volume failed.
>   Failed to setup physical volume "/dev/gnbd/pool1"
>
> blockdev gives me this error when I try to re-read the partition table after
> zeroing it with 'dd' (disk unmounted, not GNBD exported):
>
> [root@pool-serv1 ~]# blockdev --rereadpt /dev/hdc
> BLKRRPART: Input/output error
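>
> (The zeroing itself was just something along the lines of the following,
> i.e. wiping the first sector that holds the partition table:)
>
> [root@pool-serv1 ~]# dd if=/dev/zero of=/dev/hdc bs=512 count=1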
>
> I know I'm doing something wrong, but I just can't seem to figure it out.
>
>
> Ryan Thomson (thomsonr at ucalgary.ca) wrote:
> >
> > Hello.
> >
> > I am currently playing around with the GFS and the cluster tools which I
> > checked out of CVS on December 3rd.  I've created a mock setup using three
> > cluster nodes, one of which is exporting a disk via GNBD over a private
> > network to the other two nodes.
> >
> > The physical setup looks like this:
> >
> > nfs-serv1------|
> >                |----pool-serv1
> > nfs-serv2------|
> >
> > What I thought I wanted to do was export /dev/hdc from pool-serv1 over GNBD,
> > import it on one nfs-serv node, use LVM2 to create some logical volumes,
> > import it on the other nfs-serv node as well, and then slap GFS over the
> > logical volumes so both nfs-servers can use the same LV concurrently. I
> > either get a "Device /dev/gnbd/pool1 not found" error or a metadata error,
> > depending on the state of the partition table on that block device: "Device
> > /dev/gnbd/pool1 not found" when there are no partitions on the disk, and the
> > metadata error when I use 'dd' to zero out the partition table.
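> >
> > In other words, what I'm aiming for is roughly this (from memory; the export,
> > VG, LV and filesystem names are just placeholders I'd use, and the cluster
> > name matches my cluster.conf below):
> >
> > [root@pool-serv1 ~]# gnbd_serv
> > [root@pool-serv1 ~]# gnbd_export -d /dev/hdc -e pool1
> >
> > [root@wolverine ~]# gnbd_import -i pool-serv1
> > [root@wolverine ~]# pvcreate /dev/gnbd/pool1
> > [root@wolverine ~]# vgcreate pool_vg /dev/gnbd/pool1
> > [root@wolverine ~]# lvcreate -L 10G -n lv0 pool_vg
> > [root@wolverine ~]# gfs_mkfs -p lock_dlm -t GNBD_SAN_TEST:gfs0 -j 2 /dev/pool_vg/lv0
> > [root@wolverine ~]# mount -t gfs /dev/pool_vg/lv0 /mnt/gfs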
> >
> > I can execute pvcreate on pool-serv1 sometimes, but not others, and I can't
> > figure out exactly under which situations it "works" and which it doesn't.
> > Either way, when it "works", none of the other nodes seem to see the PV, VG or
> > LVs I create locally on pool-serv1.
> >
> > What I've done so far on every node:
> >
> > ccsd
> > cman_tool join
> > fence_tool join
> > clvmd
> >
> > After those commands, every node seems to join the cluster perfectly. Looking
> > in /proc/cluster confirms this.
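> >
> > By "looking in /proc/cluster" I mean checking files along the lines of:
> >
> > [root@wolverine ~]# cat /proc/cluster/nodes
> > [root@wolverine ~]# cat /proc/cluster/services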
> >
> > My /etc/cluster/cluster.conf is as follows:
> >
> > <?xml version="1.0"?>
> > <cluster name="GNBD_SAN_TEST" config_version="1">
> >
> >
> > <cman>
> >         <multicast addr="224.0.0.1"/>
> > </cman>
> >
> >
> > <clusternodes>
> >   <clusternode name="wolverine" votes="1">
> >         <fence>
> >           <method name="single">
> >                 <device name="human" ipaddr="10.1.1.1"/>
> >           </method>
> >         </fence>
> >
> >
> >         <multicast addr="224.0.0.1" interface="eth1"/>
> >   </clusternode>
> >
> >
> >   <clusternode name="skunk" votes="1">
> >         <fence>
> >           <method name="single">
> >                 <device name="human" ipaddr="10.1.1.2"/>
> >           </method>
> >         </fence>
> >         <multicast addr="224.0.0.1" interface="eth1"/>
> >   </clusternode>
> >
> >   <clusternode name="pool-serv1" votes="1">
> >         <fence>
> >           <method name="single">
> >                 <device name="human" ipaddr="10.1.1.10"/>
> >           </method>
> >         </fence>
> >         <multicast addr="224.0.0.1" interface="eth0"/>
> >   </clusternode>
> > </clusternodes>
> >
> > <fencedevices>
> >         <fencedevice name="human" agent="fence_manual"/>
> > </fencedevices>
> >
> > </cluster>
> >
> > I'm thinking this might have something to do with fencing, since I've read that
> > you need to fence GNBD nodes using fence_gnbd, but I have no actual foundation
> > for that assumption. Also, my understanding of fencing is... poor.
> >
> > I suppose my major question is this:
> >
> > How should I be setting this up? I want the two nfs-servers to both import the
> > same GNBD export (shared storage), see the same LVs on that GNBD block device
> > and put GFS on the LVs so both nfs-servers can read/write to the GNBD block
> > device at the same time. If I'm going about this totally the wrong way, please
> > advise.
> >
> > Any insight would be helpful.
>
>
>



