<div dir="ltr">Hi Eric,<div><br></div><div>Answering your queries -</div><div><br></div><div><i><span style="font-size:12.8px">"Did you check if your active-passive model HA stack can always work correctly and stably by</span><br style="font-size:12.8px"><span style="font-size:12.8px">putting one node into offline state?"</span></i><br></div><div><br></div><div> Yes, it works perfectly while failing over and failing back.</div><div><br></div><div><i><span style="font-size:12.8px">"I noticed you didn't configure an LVM resource agent to manage your VG's (de)activation task,</span><br style="font-size:12.8px"><span style="font-size:12.8px">not sure if it can always work as expected, so do some extra checking :)"</span><br></i></div><div><br></div><div> Strangely, the Pacemaker active-passive configuration example shows the VG controlled by Pacemaker, while the active-active one does not. I took the active-active Pacemaker configuration and created 2 LVs; then, instead of formatting them with the clustered GFS2 filesystem, I used plain XFS and made sure that each LV is mounted on only one node at a time. 
(lv01 on node1, lv02 on node2)</div><div><br></div><div> <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-clustsetup-gfs2">https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-clustsetup-gfs2</a></div><div><br></div><div> I can see the clustered VG and LVs as soon as <span style="font-size:12.8px">ocf:heartbeat:clvm is started.</span></div><div><br></div><div>Is there anything I am missing here?</div><div><br></div><div>Regards,</div><div><br></div><div><br></div><div>Indivar Nair</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Nov 14, 2017 at 10:22 AM, Eric Ren <span dir="ltr"><<a href="mailto:zren@suse.com" target="_blank">zren@suse.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<p>I had a look at your setup, and I have one question:</p>
<p>Did you check if your active-passive model HA stack can always
work correctly and stably by<br>
putting one node into offline state?</p>
<p>I noticed you didn't configure an LVM resource agent to manage your
VG's (de)activation task,<br>
not sure if it can always work as expected, so do some extra
checking :)</p>
<p>Eric<br>
</p><div><div class="h5">
<br>
<div class="m_-8311155611117165567moz-cite-prefix">On 11/03/2017 02:38 PM, Indivar Nair
wrote:<br>
</div>
</div></div><blockquote type="cite"><div><div class="h5">
<div dir="ltr">Hi Eric, All,
<div><br>
</div>
<div>Thanks for the input. I have got it working. </div>
<div><br>
</div>
<div>Here is what I did -</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>-<br>
</div>
<div>
<div>
<div>Cluster Setup:</div>
<div>2 Nodes with CentOS 7.x: clstr01-nd01, clstr01-nd02</div>
<div>Common storage array between both nodes (8 shared
volumes, presented as /dev/mapper/mpatha to
/dev/mapper/mpathh)</div>
<div>2 Port NICs, bonded (bond0) in each node</div>
<div><br>
</div>
<div>Resource group grp_xxx (nd01 preferred) - </div>
<div>Mount Point: /clstr01-xxx </div>
<div>Cluster IP: <a href="http://172.16.0.101/24" target="_blank">172.16.0.101/24</a></div>
<div><br>
</div>
<div>Resource group grp_yyy (nd02 preferred) - </div>
<div>Mount Point: /clstr01-yyy</div>
<div>Cluster IP: <a href="http://172.16.0.102/24" target="_blank">172.16.0.102/24</a></div>
<div><br>
</div>
<div><br>
</div>
<div>On both nodes:</div>
<div>--------------</div>
<div>Edit /etc/lvm/lvm.conf, and configure 'filter' and
'global_filter' parameters to scan only the required
(local and shared) devices.</div>
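For reference, a minimal sketch of such a filter, assuming the local system disk is /dev/sda2 (a hypothetical name - adjust to your layout) and the shared devices are the mpatha-mpathh multipath maps:

```
# /etc/lvm/lvm.conf -- accept only the local disk and the shared
# multipath devices; reject everything else
filter = [ "a|^/dev/sda2$|", "a|^/dev/mapper/mpath[a-h]$|", "r|.*|" ]
global_filter = [ "a|^/dev/sda2$|", "a|^/dev/mapper/mpath[a-h]$|", "r|.*|" ]
```

The "r|.*|" at the end rejects anything not explicitly accepted, which keeps LVM from scanning the underlying /dev/sd* paths behind the multipath maps.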
<div><br>
</div>
<div>Then run - </div>
<div># /sbin/lvmconf --enable-cluster</div>
<div>Rebuild initramfs - </div>
<div># mv /boot/initramfs-$(uname -r).img
/boot/initramfs-$(uname -r).img-orig</div>
<div># dracut -H -f /boot/initramfs-$(uname -r).img $(uname
-r)</div>
<div><br>
</div>
<div>Reboot both nodes.</div>
<div>--------------</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div>After rebooting both nodes, run the following commands
on any one node:</div>
<div>--------------</div>
<div># pcs cluster start --all</div>
<div># pcs resource create dlm ocf:pacemaker:controld op
monitor interval=30s on-fail=fence clone interleave=true
ordered=true</div>
<div># pcs resource create clvmd ocf:heartbeat:clvm op
monitor interval=30s on-fail=fence clone interleave=true
ordered=true</div>
<div># pcs constraint order start dlm-clone then clvmd-clone</div>
<div># pcs constraint colocation add clvmd-clone with
dlm-clone</div>
<div><br>
</div>
<div><br>
</div>
<div># pvcreate /dev/mapper/mpath{a,b,c,d,e,f,g,h}</div>
<div># vgcreate -Ay -cy clstr_vg01
/dev/mapper/mpath{a,b,c,d,e,f,g,h}</div>
<div># lvcreate -L 100T -n lv01 clstr_vg01</div>
<div># mkfs.xfs /dev/clstr_vg01/lv01</div>
<div># lvcreate -L 100T -n lv02 clstr_vg01</div>
<div># mkfs.xfs /dev/clstr_vg01/lv02</div>
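One way to sanity-check the result at this point (a sketch using standard LVM reporting commands; run on either node after clvmd is up):

```shell
# The 6th character of the VG attr string should be 'c' (clustered)
vgs -o vg_name,vg_attr clstr_vg01
# Both LVs should appear with the sizes created above
lvs -o lv_name,lv_size clstr_vg01
```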
<div><br>
</div>
<div><br>
</div>
<div># pcs resource create xxx_mount
ocf:heartbeat:Filesystem device=/dev/clstr_vg01/lv01
directory=/clstr01-xxx fstype=xfs --group xxx_grp
--disabled</div>
<div><br>
</div>
<div># pcs resource create xxx_ip_01 ocf:heartbeat:IPaddr2
ip=172.16.0.101 cidr_netmask=24 nic=bond0:0 op monitor
interval=30s --group xxx_grp --disabled</div>
<div><br>
</div>
<div># pcs constraint location xxx_grp prefers
clstr01-nd01=50</div>
<div># pcs constraint order start clvmd-clone then xxx_grp</div>
<div><br>
</div>
<div># pcs resource enable xxx_mount</div>
<div># pcs resource enable xxx_ip_01</div>
<div><br>
</div>
<div><br>
</div>
<div># pcs resource create yyy_mount
ocf:heartbeat:Filesystem device=/dev/clstr_vg01/lv02
directory=/clstr01-yyy fstype=xfs --group yyy_grp
--disabled</div>
<div><br>
</div>
<div># pcs resource create yyy_ip_01 ocf:heartbeat:IPaddr2
ip=172.16.0.102 cidr_netmask=24 nic=bond0:1 op monitor
interval=30s --group yyy_grp --disabled</div>
<div><br>
</div>
<div># pcs constraint location yyy_grp prefers
clstr01-nd02=50</div>
<div># pcs constraint order start clvmd-clone then yyy_grp</div>
<div><br>
</div>
<div># pcs resource enable yyy_mount</div>
<div># pcs resource enable yyy_ip_01</div>
<div>--------------<br>
</div>
<div><br>
</div>
<div><br>
</div>
<div># pcs resource show</div>
<div>--------------</div>
</div>
<div>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>-<br>
</div>
</div>
<div><br>
</div>
<div><br>
</div>
<div>Regards,</div>
<div><br>
</div>
<div><br>
</div>
<div>Indivar Nair</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Oct 16, 2017 at 8:36 AM, Eric
Ren <span dir="ltr"><<a href="mailto:zren@suse.com" target="_blank">zren@suse.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
<p>Hi,<br>
</p>
<span>
<div class="m_-8311155611117165567m_-1693594712510700916moz-cite-prefix">On
10/13/2017 06:40 PM, Indivar Nair wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Thanks Eric,
<div><br>
</div>
<div>I want to keep a single VG so that I can get
the bandwidth (LVM Striping) of all the disks
(PVs) </div>
<div> PLUS </div>
<div>the flexibility to adjust the space allocation
between both LVs. Each LV will be used by a
different department. With each LV served from a
different host, I can distribute the network bandwidth too.</div>
<div>I would also like to take snapshots of each LV
before backing up.</div>
<div><br>
</div>
<div>I have been reading more about CLVM+Pacemaker
options.</div>
<div>I can see that it is possible to have the same
VG activated on multiple hosts for a GFSv2
filesystem.<br>
</div>
<div>In which case, it is the same PVs, VG and LV
getting activated on all hosts.</div>
</div>
</blockquote>
<br>
</span> OK! It sounds reasonable.<span><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>In my case, we will have the same PVs and VG
activated on both hosts, but LV1 on Host01 and LV2
on Host02. I plan to use ext4 or XFS filesystems.</div>
<div><br>
</div>
<div>Is there some possibility that it would work?</div>
</div>
</blockquote>
<br>
</span> As said in the last mail, the new resource agent
[4] will probably work for you, but I haven't tested this
case yet. It's easy to try - the RA is just a shell<br>
script; you can simply copy LVM-activate to
/usr/lib/ocf/resource.d/heartbeat/ (assuming you've
installed the resource-agents package), and then configure<br>
"clvm + LVM-activate" for pacemaker [5]. Please report
back if it doesn't work for you.<br>
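For illustration only, a hypothetical pcs sketch of that pairing - the resource name is made up, and the LVM-activate parameter names (vgname, vg_access_mode) come from the WIP pull request [4] and may change:

```shell
# Activate the shared VG through the WIP LVM-activate RA, on top of clvmd
# (hypothetical sketch; parameter names may differ in the final RA)
pcs resource create vg01_active ocf:heartbeat:LVM-activate \
    vgname=clstr_vg01 vg_access_mode=clvmd \
    op monitor interval=30s clone interleave=true
```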
<br>
The LVM-activate RA is a work in progress. We are considering
whether to merge it into the old LVM RA, so it may change at any
time.<br>
<br>
[5]
<a class="m_-8311155611117165567m_-1693594712510700916moz-txt-link-freetext" href="https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_clvm_config.html" target="_blank">https://www.suse.com/documenta<wbr>tion/sle-ha-12/book_sleha/<wbr>data/sec_ha_clvm_config.html</a><span><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<br>
</div>
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> <br>
[1] <a href="https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/clvm" rel="noreferrer" target="_blank">https://github.com/ClusterLabs<wbr>/resource-agents/blob/master/h<wbr>eartbeat/clvm</a><br>
[2] <a href="https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/LVM" rel="noreferrer" target="_blank">https://github.com/ClusterLabs<wbr>/resource-agents/blob/master/h<wbr>eartbeat/LVM</a><br>
[3] <a href="https://www.redhat.com/archives/linux-lvm/2017-January/msg00025.html" rel="noreferrer" target="_blank">https://www.redhat.com/archive<wbr>s/linux-lvm/2017-January/msg00<wbr>025.html</a><br>
[4] <a href="https://github.com/ClusterLabs/resource-agents/pull/1040" rel="noreferrer" target="_blank">https://github.com/ClusterLabs<wbr>/resource-agents/pull/1040</a><span class="m_-8311155611117165567m_-1693594712510700916HOEnZb"><font color="#888888"><br>
</font></span></blockquote>
</div>
</div>
</blockquote>
<br>
Eric<br>
<br>
</span></div>
</blockquote>
</div>
<br>
</div>
<br>
<fieldset class="m_-8311155611117165567mimeAttachmentHeader"></fieldset>
<br>
</div></div><pre>______________________________<wbr>_________________
linux-lvm mailing list
<a class="m_-8311155611117165567moz-txt-link-abbreviated" href="mailto:linux-lvm@redhat.com" target="_blank">linux-lvm@redhat.com</a>
<a class="m_-8311155611117165567moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/linux-lvm" target="_blank">https://www.redhat.com/<wbr>mailman/listinfo/linux-lvm</a>
read the LVM HOW-TO at <a class="m_-8311155611117165567moz-txt-link-freetext" href="http://tldp.org/HOWTO/LVM-HOWTO/" target="_blank">http://tldp.org/HOWTO/LVM-<wbr>HOWTO/</a></pre>
</blockquote>
<br>
</div>
</blockquote></div><br></div>