[Linux-cluster] pacemaker location constraint
andrew at beekhof.net
Mon May 26 03:15:40 UTC 2014
On 22 May 2014, at 5:35 pm, C. Handel <christoph at macht-blau.org> wrote:
> > > The stripped config is:
> > yeah, don't do that. we need the whole thing (the cibadmin -Ql output in
> > your case since you're using crmsh)
> I currently mix pcs and crmsh. EL6 now ships pcs and no longer crmsh, so I am trying to learn the new default ;)
> full output from pcs config below. There are three service groups each with an ip.
> A) ip_a and nfsserver together with filesystems should run on x430,
> B) ip_b and service_b with its filesystem and puppet on x431,
> C) ip_c and service_c with its filesystem and nothing else on x432.
> On Wed, May 21, 2014 at 6:02 PM, C. Handel <christoph at macht-blau.org> wrote:
> location constraints are somehow not honored by pacemaker 1.1.10 on el6.
> I have an IP address which is placed first, and then a volume group and a filesystem which follow it to the same node. The IP should be placed on x432, but for some reason it chooses x430. There are additional resources running (also choosing strange nodes).
The IP prefers not to run on x432 because vg_service_c is colocated with it and vg_service_c cannot run there:
vg_service_c: migration-threshold=1000000 fail-count=1000000 last-failure='Wed May 21 18:52:24 2014'
(as seen with crm_mon -f)
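A fail-count of 1000000 (effectively INFINITY, i.e. at migration-threshold) bans vg_service_c from that node until the failure is cleared, and the mandatory colocation propagates that ban to the IP. A minimal sketch of how to confirm and clear it, run on any cluster node (the failure-timeout value below is illustrative, not taken from this config):

```shell
# one-shot cluster status including per-resource fail counts
crm_mon -1 -f

# clear the failure history so the policy engine re-scores placement
pcs resource cleanup vg_service_c
# equivalently: crm_resource --cleanup --resource vg_service_c

# optionally let fail counts expire on their own; without a
# failure-timeout, cluster-recheck-interval alone never clears them
pcs resource update vg_service_c meta failure-timeout=60s
```

Note that cluster-recheck-interval=60s only triggers a re-evaluation; it does not expire failures unless failure-timeout is set on the resource.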
> the resource (pcs status):
> ip_x43c (ocf::heartbeat:IPaddr2): Started x430
> the constraint (pcs constraint)
> Resource: x43c
> Enabled on: x432 (score:10001)
> Cluster properties (pcs property)
> cluster-infrastructure: cman
> cluster-recheck-interval: 60s
> default-resource-stickiness: 10
> maintenance-mode: false
> symmetric-cluster: true
> Checking the scoring via crm_simulate -sL, I get:
> native_color: ip_x43c allocation score on x430: 30
> native_color: ip_x43c allocation score on x431: 0
> native_color: ip_x43c allocation score on x432: -INFINITY
> The score of 30 on x430 is OK; there is a resource group with two resources colocated with the IP. But I can't figure out why x432 gets -INFINITY; there is no further constraint regarding any of the resources in question. I expect them to migrate to x432 after 60 seconds, but nothing happens.
> I tried stopping vgfs_service_c, but the IP remains. I stopped the IP and started it again; it comes up on x430 again.
> pacemaker version:
> The stripped config is:
> node x430
> node x431
> node x432
> primitive fs_service_c ocf:heartbeat:Filesystem \
> params device="/dev/mapper/vg_service_c-service_c" directory="/common/service-c" fstype="ext4" \
> op start interval="0" timeout="60s" \
> op stop interval="0" timeout="60s" \
> meta target-role="Started"
> primitive vg_service_c ocf:heartbeat:LVM \
> params volgrpname="vg_service_c" exclusive="true" \
> op start interval="0" timeout="120" \
> op stop interval="0" timeout="120" \
> op monitor interval="10" timeout="120"
> primitive ip_x43c ocf:heartbeat:IPaddr2 \
> params ip="18.104.22.168" \
> op monitor interval="30" timeout="20"
> group vgfs_service_c vg_service_c fs_service_c
> location location-ip_x43c-x432-10001 ip_x43c 10001: x432
> colocation colocation-vgfs_service_c-ip_x43c-INFINITY inf: vgfs_service_c ip_x43c
> property $id="cib-bootstrap-options" \
> dc-version="1.1.10-14.el6-368c726" \
> cluster-infrastructure="cman" \
> last-lrm-refresh="1400683270" \
> stonith-enabled="true" \
> stonith-action="poweroff" \
> default-resource-stickiness="10" \
> cluster-recheck-interval="60s" \
> maintenance-mode="false"
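Given the mandatory colocation above, the IP inherits -INFINITY on every node where vgfs_service_c cannot run. One way to make the intended dependency explicit is to put all three resources in a single group, since group membership already implies colocation plus start order (a sketch only; the group name grp_service_c is invented here, and this replaces the separate colocation constraint):

```
group grp_service_c vg_service_c fs_service_c ip_x43c
location location-grp_service_c-x432 grp_service_c 10001: x432
```

The location score then applies to the whole group, so there is only one placement decision to reason about.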