[Linux-cluster] DLM does not start

Nguyễn Trường Sơn hunters1094 at gmail.com
Wed Sep 2 13:58:06 UTC 2015


How can I use fencing?

Do you mean "pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op
monitor interval=60s on-fail=fence"?

It still fails with the same error.

I am running CentOS 7.0 with pacemaker-1.1.12-22.el7_1.2.x86_64.
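For what it's worth, the controld error in the log below ("stonith-enabled may not be deactivated to use the DLM") means the agent requires the cluster-level property stonith-enabled=true; setting on-fail=fence on the monitor op is not a substitute for an actual fence device. A minimal sketch of what that could look like (fence_ipmilan and the addresses/credentials below are placeholders; substitute whatever fencing hardware the nodes really have):

```shell
# The DLM refuses to start while stonith is disabled, so re-enable it:
pcs property set stonith-enabled=true

# Configure a fence device for each node (agent and parameters here are
# illustrative only -- adjust for the real fencing hardware):
pcs stonith create fence-node01 fence_ipmilan \
    pcmk_host_list=node01 ipaddr=10.0.0.101 login=admin passwd=secret
pcs stonith create fence-node02 fence_ipmilan \
    pcmk_host_list=node02 ipaddr=10.0.0.102 login=admin passwd=secret

# Verify before pushing the dlm configuration again:
pcs property show stonith-enabled
pcs stonith show
```

With working stonith in place, the dlm_start_0 "not configured" (rc=6) failures should no longer occur.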



2015-09-02 20:39 GMT+07:00 emmanuel segura <emi2fast at gmail.com>:

> please, use fencing.
>
> 2015-09-02 13:56 GMT+02:00 Nguyễn Trường Sơn <hunters1094 at gmail.com>:
> > Dear all
> >
> > I have a 2-node cluster deployed with GFS2; my storage is FC with
> > multipath.
> >
> > I followed the tutorial at
> > http://clusterlabs.org/doc/Cluster_from_Scratch.pdf
> >
> > # pcs status
> >
> > Cluster name: clustered
> > Last updated: Wed Sep  2 18:40:28 2015
> > Last change: Wed Sep  2 18:40:02 2015
> > Stack: corosync
> > Current DC: node02 (2) - partition with quorum
> > Version: 1.1.12-a14efad
> > 2 Nodes configured
> > 0 Resources configured
> >
> >
> > Online: [ node01 node02 ]
> >
> > Full list of resources:
> >
> >
> > PCSD Status:
> >   node01: Online
> >   node02: Online
> >
> > Daemon Status:
> >   corosync: active/enabled
> >   pacemaker: active/enabled
> >   pcsd: active/enabled
> >
> > When I create the dlm resource:
> >
> > # pcs cluster cib dlm_cfg
> > # pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor
> > interval=60s
> > # pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
> > # pcs -f dlm_cfg resource show
> > # pcs cluster cib-push dlm_cfg
> >
> > # pcs status  (the resources section shows errors)
> >
> > Full list of resources:
> >
> >  Clone Set: dlm-clone [dlm]
> >      Stopped: [ node01 node02 ]
> >
> > Failed actions:
> >     dlm_start_0 on node01 'not configured' (6): call=69, status=complete,
> > exit-reason='none', last-rc-change='Wed Sep  2 18:47:13 2015',
> queued=1ms,
> > exec=50ms
> >     dlm_start_0 on node02 'not configured' (6): call=65, status=complete,
> > exit-reason='none', last-rc-change='Wed Sep  2 18:47:13 2015',
> queued=0ms,
> > exec=50ms
> >
> > And /var/log/pacemaker.log shows the error:
> >
> > controld(dlm)[24304]:    2015/09/02_18:47:13 ERROR: The cluster property
> > stonith-enabled may not be deactivated to use the DLM
> > Sep 02 18:47:13 [4204] node01       lrmd:     info: log_finished:
> > finished - rsc:dlm action:start call_id:65 pid:24304 exit-code:6
> > exec-time:50ms queue-time:0ms
> > Sep 02 18:47:14 [4207] node01       crmd:     info: action_synced_wait:
> > Managed controld_meta-data_0 process 24329 exited with rc=0
> > Sep 02 18:47:14 [4207] node01       crmd:   notice: process_lrm_event:
> > Operation dlm_start_0: not configured (node=node01, call=65, rc=6,
> > cib-update=75, confirmed=true)
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_process_request:
> > Forwarding cib_modify operation for section status to master
> > (origin=local/crmd/75)
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
> > Diff: --- 0.54.17 2
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
> > Diff: +++ 0.54.18 (null)
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:     +
> > /cib:  @num_updates=18
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:     +
> >
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='dlm']/lrm_rsc_op[@id='dlm_last_0']:
> > @operation_key=dlm_start_0, @operation=start,
> > @transition-key=7:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5,
> > @transition-magic=0:6;7:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5,
> > @call-id=69, @rc-code=6, @exec-time=50, @queue-time=1
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
>  ++
> >
> /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='dlm']:
> > <lrm_rsc_op id="dlm_last_failure_0" operation_key="dlm_start_0"
> > operation="start" crm-debug-origin="do_update_resource"
> > crm_feature_set="3.0.9"
> > transition-key="7:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5"
> > transition-magic="0:6;7:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5"
> > call-id="69" rc-code="6" op-status="0" interval="0" last
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_process_request:
> > Completed cib_modify operation for section status: OK (rc=0,
> > origin=node02/crmd/555, version=0.54.18)
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
> > Diff: --- 0.54.18 2
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
> > Diff: +++ 0.54.19 (null)
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:     +
> > /cib:  @num_updates=19
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:     +
> >
> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='dlm']/lrm_rsc_op[@id='dlm_last_0']:
> > @operation_key=dlm_start_0, @operation=start,
> > @transition-key=9:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5,
> > @transition-magic=0:6;9:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5,
> > @call-id=65, @rc-code=6, @exec-time=50
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_perform_op:
>  ++
> >
> /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='dlm']:
> > <lrm_rsc_op id="dlm_last_failure_0" operation_key="dlm_start_0"
> > operation="start" crm-debug-origin="do_update_resource"
> > crm_feature_set="3.0.9"
> > transition-key="9:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5"
> > transition-magic="0:6;9:159:0:5d440f4a-656a-4bb0-8c9b-0ed09d22c7f5"
> > call-id="65" rc-code="6" op-status="0" interval="0" last
> > Sep 02 18:47:14 [4205] node01      attrd:     info: attrd_peer_update:
> > Setting fail-count-dlm[node02]: (null) -> INFINITY from node02
> > Sep 02 18:47:14 [4205] node01      attrd:     info: attrd_peer_update:
> > Setting last-failure-dlm[node02]: (null) -> 1441194434 from node02
> > Sep 02 18:47:14 [4202] node01        cib:     info: cib_process_request:
> > Completed cib_modify operation for section status: OK (rc=0,
> > origin=node01/crmd/75, version=0.54.19)
> >
> >
> > Thank you very much.
> >
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
> --
>   .~.
>   /V\
>  //  \\
> /(   )\
> ^`~'^
>




-- 
**************************************
Nguyễn Trường Sơn
Tin3K50 - Information Systems K50
Hanoi University of Science and Technology (ĐHBK Hà Nội)
Mobile: 0904010635
Y!M: hunters_1094

