[linux-lvm] Shared VG, Separate LVs

Eric Ren zren at suse.com
Tue Nov 14 04:52:25 UTC 2017


I had a look at your setup, and I have one question:

Did you check that your active-passive HA stack always works correctly
and stably when you put one node offline?
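
For example, with the pcs version on CentOS 7 you could exercise a
failover like this (the subcommand name varies between pcs versions):

# pcs cluster standby clstr01-nd01    # take nd01 offline; xxx_grp should fail over
# pcs status                          # confirm all resources restarted on nd02
# pcs cluster unstandby clstr01-nd01  # bring nd01 back; xxx_grp should fail back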

I noticed you didn't configure an LVM resource agent to manage the VG's
(de)activation, and I'm not sure that will always work as expected, so
please do some extra failure testing :)
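
If you do add that, here is an untested sketch using the WIP
LVM-activate RA discussed below (the parameter names may still change):

# pcs resource create xxx_lv ocf:heartbeat:LVM-activate \
      vgname=clstr_vg01 lvname=lv01 vg_access_mode=clvmd \
      activation_mode=exclusive --group xxx_grp --before xxx_mount

so that lv01 is activated exclusively on whichever node runs xxx_grp,
before the filesystem mount is attempted.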

Eric


On 11/03/2017 02:38 PM, Indivar Nair wrote:
> Hi Eric, All,
>
> Thanks for the input. I have got it working.
>
> Here is what I did -
> -----------------------------------------------------------------------
> Cluster Setup:
> 2 Nodes with CentOS 7.x: clstr01-nd01, clstr01-nd02
> Common storage array between both nodes (8 shared volumes, presented 
> as /dev/mapper/mpatha to /dev/mapper/mpathh)
> 2 Port NICs, bonded (bond0) in each node
>
> Resource group xxx_grp (nd01 preferred) -
> Mount Point: /clstr01-xxx
> Cluster IP: 172.16.0.101/24
>
> Resource group yyy_grp (nd02 preferred) -
> Mount Point: /clstr01-yyy
> Cluster IP: 172.16.0.102/24
>
>
> On both nodes:
> --------------
> Edit /etc/lvm/lvm.conf, and configure 'filter' and 'global_filter' 
> parameters to scan only the required (local and shared) devices.
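>
> For example (illustrative only; the local boot device path /dev/sda is
> an assumption, adjust it to your hardware):
>
> filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda|", "r|.*|" ]
> global_filter = [ "a|^/dev/mapper/mpath[a-h]$|", "a|^/dev/sda|", "r|.*|" ]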
>
> Then run -
> # /sbin/lvmconf --enable-cluster
> Rebuild initramfs -
> # mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img-orig
> # dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)
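>
> To check that clustered locking is enabled before rebooting:
> # grep locking_type /etc/lvm/lvm.conf    # expect: locking_type = 3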
>
> Reboot both nodes.
> --------------
>
>
>
> After rebooting both nodes, run the following commands on any one node:
> --------------
> # pcs cluster start --all
> # pcs resource create dlm ocf:pacemaker:controld \
>       op monitor interval=30s on-fail=fence clone interleave=true ordered=true
> # pcs resource create clvmd ocf:heartbeat:clvm \
>       op monitor interval=30s on-fail=fence clone interleave=true ordered=true
> # pcs constraint order start dlm-clone then clvmd-clone
> # pcs constraint colocation add clvmd-clone with dlm-clone
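>
> (A quick sanity check before creating the VG; both clones should be
> Started on both nodes:)
> # pcs status resources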
>
>
> # pvcreate /dev/mapper/mpath{a,b,c,d,e,f,g,h}
> # vgcreate -Ay -cy clstr_vg01 /dev/mapper/mpath{a,b,c,d,e,f,g,h}
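>
> (The VG should now be clustered; the sixth attribute flag is 'c':)
> # vgs -o vg_name,vg_attr clstr_vg01    # e.g. wz--nc
>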
> # lvcreate -L 100T -n lv01 clstr_vg01
> # mkfs.xfs /dev/clstr_vg01/lv01
> # lvcreate -L 100T -n lv02 clstr_vg01
> # mkfs.xfs /dev/clstr_vg01/lv02
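>
> (Note: as created above, lv01 and lv02 are linear. To actually spread
> I/O across all 8 PVs (the striping goal mentioned further down), the
> stripes must be requested explicitly; the stripe count and size here
> are only an example:)
> # lvcreate -i 8 -I 512 -L 100T -n lv01 clstr_vg01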
>
>
> # pcs resource create xxx_mount ocf:heartbeat:Filesystem \
>       device=/dev/clstr_vg01/lv01 directory=/clstr01-xxx fstype=xfs \
>       --group xxx_grp --disabled
>
> # pcs resource create xxx_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.101 \
>       cidr_netmask=24 nic=bond0:0 op monitor interval=30s \
>       --group xxx_grp --disabled
>
> # pcs constraint location xxx_grp prefers clstr01-nd01=50
> # pcs constraint order start clvmd-clone then xxx_grp
>
> # pcs resource enable xxx_mount
> # pcs resource enable xxx_ip_01
>
>
> # pcs resource create yyy_mount ocf:heartbeat:Filesystem \
>       device=/dev/clstr_vg01/lv02 directory=/clstr01-yyy fstype=xfs \
>       --group yyy_grp --disabled
>
> # pcs resource create yyy_ip_01 ocf:heartbeat:IPaddr2 ip=172.16.0.102 \
>       cidr_netmask=24 nic=bond0:1 op monitor interval=30s \
>       --group yyy_grp --disabled
>
> # pcs constraint location yyy_grp prefers clstr01-nd02=50
> # pcs constraint order start clvmd-clone then yyy_grp
>
> # pcs resource enable yyy_mount
> # pcs resource enable yyy_ip_01
> --------------
>
>
> # pcs resource show
> --------------
> -----------------------------------------------------------------------
>
>
> Regards,
>
>
> Indivar Nair
>
> On Mon, Oct 16, 2017 at 8:36 AM, Eric Ren <zren at suse.com> wrote:
>
>     Hi,
>
>     On 10/13/2017 06:40 PM, Indivar Nair wrote:
>>     Thanks Eric,
>>
>>     I want to keep a single VG so that I can get the bandwidth (LVM
>>     striping) of all the disks (PVs), plus the flexibility to adjust
>>     the space allocation between both LVs.
>>     Each LV will be used by a different department. With each LV on a
>>     different host, I can distribute the network bandwidth too.
>>     I would also like to take snapshots of each LV before backing up.
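>>
>>     (For example; the sizes are illustrative, snapshots in a clustered
>>     VG need the origin LV active exclusively on that node, and an XFS
>>     snapshot must be mounted with "nouuid":)
>>     # lvcreate -s -L 1T -n lv01_snap /dev/clstr_vg01/lv01
>>     # mount -o ro,nouuid /dev/clstr_vg01/lv01_snap /mnt/backup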
>>
>>     I have been reading more about CLVM+Pacemaker options.
>>     I can see that it is possible to have the same VG activated on
>>     multiple hosts for a GFS2 filesystem.
>>     In that case, the same PVs, VG, and LV are activated on all hosts.
>
>     OK! That sounds reasonable.
>
>>
>>     In my case, we will have the same PVs and VG activated on both
>>     hosts, but LV1 on Host01 and LV2 on Host02. I plan to use ext4 or
>>     XFS filesystems.
>>
>>     Is there a reasonable chance this will work?
>
>     As I said in the last mail, the new resource agent [4] will
>     probably work for you, but I haven't tested this case yet. It's
>     easy to try: the RA is just a shell script, so you can copy
>     LVM-activate into /usr/lib/ocf/resource.d/heartbeat/ (assuming
>     you've installed the resource-agents package), and then configure
>     "clvm + LVM-activate" for Pacemaker [5]. Please report back if it
>     doesn't work for you.
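>
>     For example (assuming the standard resource-agents layout):
>     # cp LVM-activate /usr/lib/ocf/resource.d/heartbeat/
>     # chmod 755 /usr/lib/ocf/resource.d/heartbeat/LVM-activate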
>
>     The LVM-activate RA is still a work in progress; we are considering
>     whether to merge it into the old LVM RA, so it may change at any
>     time.
>
>     [5]
>     https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_clvm_config.html
>
>>
>>
>>
>>         [1]
>>         https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/clvm
>>         [2]
>>         https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM
>>         [3]
>>         https://www.redhat.com/archives/linux-lvm/2017-January/msg00025.html
>>         [4] https://github.com/ClusterLabs/resource-agents/pull/1040
>>
>
>     Eric
>
>
>
>
