Oct 3 14:47:14 nebula1 pacemakerd: [3899]: notice: update_node_processes: 0xad17f0 Node 1172809920 now known as one, was:
Oct 3 14:47:14 nebula1 stonith-ng: [3904]: info: crm_new_peer: Node one now has id: 1172809920
Oct 3 14:47:14 nebula1 stonith-ng: [3904]: info: crm_new_peer: Node 1172809920 is now known as one
Oct 3 14:47:14 nebula1 crmd: [3908]: notice: crmd_peer_update: Status update: Client one/crmd now has status [online] (DC=nebula3)
Oct 3 14:47:14 nebula1 crmd: [3908]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_JOIN_OFFER cause=C_HA_MESSAGE origin=route_message ]
Oct 3 14:47:14 nebula1 crmd: [3908]: info: update_dc: Set DC to nebula3 (3.0.6)
Oct 3 14:47:17 nebula1 crmd: [3908]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Oct 3 14:47:17 nebula1 attrd: [3906]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Oct 3 14:47:17 nebula1 attrd: [3906]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Oct 3 14:47:23 nebula1 kernel: [  580.785037] dlm: connecting to 1172809920
Oct 3 14:48:20 nebula1 ocfs2_controld: kill node 1172809920 - ocfs2_controld PROCDOWN
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: initiate_remote_stonith_op: Initiating remote operation off for one: e2683312-e06f-44fe-8d65-852a918b7a3c
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: call_remote_stonith: Requesting that nebula1 perform op off one
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_fence: Found 1 matching devices for 'one'
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_command: Processed st_fence from nebula1: rc=-1
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_fence: Found 1 matching devices for 'one'
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_command: Processed st_fence from nebula2: rc=-1
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_fence: Found 1 matching devices for 'one'
Oct 3 14:48:20 nebula1 stonith-ng: [3904]: info: stonith_command: Processed st_fence from nebula3: rc=-1
Oct 3 14:48:21 nebula1 kernel: [  638.140337] device one-dmz-pub left promiscuous mode
Oct 3 14:48:21 nebula1 kernel: [  638.188351] device one-admin left promiscuous mode
Oct 3 14:48:21 nebula1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port one-dmz-pub
Oct 3 14:48:21 nebula1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port one-admin
Oct 3 14:48:21 nebula1 external/libvirt[5925]: notice: Domain one was stopped
Oct 3 14:48:22 nebula1 stonith-ng: [3904]: notice: log_operation: Operation 'off' [5917] (call 0 from 83164f01-342e-4838-a640-ef55c7905465) for host 'one' with device 'Stonith-ONE-Frontend' returned: 0
Oct 3 14:48:22 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: Performing: stonith -t external/libvirt -T off one
Oct 3 14:48:22 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: success: one 0
Oct 3 14:48:22 nebula1 external/libvirt[5967]: notice: Domain one is already stopped
Oct 3 14:48:22 nebula1 ntpd[3399]: Deleting interface #8 one-admin, fe80::fc54:ff:fe6e:bfdc#123, interface stats: received=0, sent=0, dropped=0, active_time=229 secs
Oct 3 14:48:22 nebula1 ntpd[3399]: Deleting interface #7 one-dmz-pub, fe80::fc54:ff:fe9e:c8e3#123, interface stats: received=0, sent=0, dropped=0, active_time=229 secs
Oct 3 14:48:22 nebula1 ntpd[3399]: peers refreshed
Oct 3 14:48:23 nebula1 stonith-ng: [3904]: notice: log_operation: Operation 'off' [5959] (call 0 from a5074eb0-6afa-4060-b3b9-d05e846e0c57) for host 'one' with device 'Stonith-ONE-Frontend' returned: 0
Oct 3 14:48:23 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: Performing: stonith -t external/libvirt -T off one
Oct 3 14:48:23 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: success: one 0
Oct 3 14:48:23 nebula1 external/libvirt[5989]: notice: Domain one is already stopped
Oct 3 14:48:23 nebula1 corosync[3674]: [TOTEM ] A processor failed, forming new configuration.
Oct 3 14:48:24 nebula1 stonith-ng: [3904]: notice: log_operation: Operation 'off' [5981] (call 0 from 1fb319d9-d388-44d4-97a9-212746707e22) for host 'one' with device 'Stonith-ONE-Frontend' returned: 0
Oct 3 14:48:24 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: Performing: stonith -t external/libvirt -T off one
Oct 3 14:48:24 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: success: one 0
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 22688: memb=4, new=0, lost=1
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: memb: quorum 1156032704
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: memb: nebula1 1189587136
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: memb: nebula2 1206364352
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: memb: nebula3 1223141568
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: lost: one 1172809920
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 22688: memb=4, new=0, lost=0
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: MEMB: quorum 1156032704
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: MEMB: nebula1 1189587136
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: MEMB: nebula2 1206364352
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: pcmk_peer_update: MEMB: nebula3 1223141568
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: ais_mark_unseen_peer_dead: Node one was not seen in the previous transition
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: update_member: Node 1172809920/one is now: lost
Oct 3 14:48:27 nebula1 corosync[3674]: [pcmk ] info: send_member_notification: Sending membership update 22688 to 4 children
Oct 3 14:48:27 nebula1 cluster-dlm: [4127]: info: ais_dispatch_message: Membership 22688: quorum retained
Oct 3 14:48:27 nebula1 cluster-dlm: [4127]: info: crm_update_peer: Node one: id=1172809920 state=lost (new) addr=r(0) ip(192.168.231.69) votes=1 born=22684 seen=22684 proc=00000000000000000000000000000000
Oct 3 14:48:27 nebula1 corosync[3674]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 3 14:48:27 nebula1 kernel: [  644.579862] dlm: closing connection to node 1172809920
Oct 3 14:48:27 nebula1 cib: [3903]: info: ais_dispatch_message: Membership 22688: quorum retained
Oct 3 14:48:27 nebula1 cib: [3903]: info: crm_update_peer: Node one: id=1172809920 state=lost (new) addr=r(0) ip(192.168.231.69) votes=1 born=22684 seen=22684 proc=00000000000000000000000000111312
Oct 3 14:48:27 nebula1 crmd: [3908]: info: ais_dispatch_message: Membership 22688: quorum retained
Oct 3 14:48:27 nebula1 crmd: [3908]: info: ais_status_callback: status: one is now lost (was member)
Oct 3 14:48:27 nebula1 crmd: [3908]: info: crm_update_peer: Node one: id=1172809920 state=lost (new) addr=r(0) ip(192.168.231.69) votes=1 born=22684 seen=22684 proc=00000000000000000000000000111312
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: notice: remote_op_done: Operation off of one by nebula1 for nebula1[83164f01-342e-4838-a640-ef55c7905465]: OK
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: notice: remote_op_done: Operation off of one by nebula1 for nebula2[a5074eb0-6afa-4060-b3b9-d05e846e0c57]: OK
Oct 3 14:48:27 nebula1 ocfs2_controld: Could not kick node 1172809920 from the cluster
Oct 3 14:48:27 nebula1 ocfs2_controld: [4180]: info: ais_dispatch_message: Membership 22688: quorum retained
Oct 3 14:48:27 nebula1 ocfs2_controld: [4180]: info: crm_update_peer: Node one: id=1172809920 state=lost (new) addr=r(0) ip(192.168.231.69) votes=1 born=22684 seen=22684 proc=00000000000000000000000000000000
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: notice: remote_op_done: Operation off of one by nebula1 for nebula3[1fb319d9-d388-44d4-97a9-212746707e22]: OK
Oct 3 14:48:27 nebula1 crmd: [3908]: notice: tengine_stonith_notify: Peer one was terminated (off) by nebula1 for nebula1: OK (ref=e2683312-e06f-44fe-8d65-852a918b7a3c)
Oct 3 14:48:27 nebula1 crmd: [3908]: notice: tengine_stonith_notify: Peer one was terminated (off) by nebula1 for nebula2: OK (ref=443f2db0-bb48-4b1f-9179-f64cb587a22c)
Oct 3 14:48:27 nebula1 crmd: [3908]: notice: tengine_stonith_notify: Peer one was terminated (off) by nebula1 for nebula3: OK (ref=ed21fc6f-2540-491a-8643-2d0258bf2f60)
Oct 3 14:48:27 nebula1 corosync[3674]: [CPG   ] chosen downlist: sender r(0) ip(192.168.231.68) ; members(old:5 left:1)
Oct 3 14:48:27 nebula1 corosync[3674]: [MAIN  ] Completed service synchronization, ready to provide service.
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: info: can_fence_host_with_device: Stonith-ONE-Frontend can fence one: static-list
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: info: stonith_fence: Found 1 matching devices for 'one'
Oct 3 14:48:27 nebula1 stonith-ng: [3904]: info: stonith_command: Processed st_fence from nebula3: rc=-1
Oct 3 14:48:27 nebula1 external/libvirt[6011]: notice: Domain one is already stopped
Oct 3 14:48:29 nebula1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-port nebula one-dmz-pub tag=753 -- set Interface one-dmz-pub "external-ids:attached-mac=\"52:54:00:9e:c8:e3\"" -- set Interface one-dmz-pub "external-ids:iface-id=\"049178a7-e96f-4364-be34-2ead6403347e\"" -- set Interface one-dmz-pub "external-ids:vm-id=\"a8069a7b-97fe-4122-85a3-0abbc011f540\"" -- set Interface one-dmz-pub external-ids:iface-status=active
Oct 3 14:48:29 nebula1 kernel: [  646.790675] device one-dmz-pub entered promiscuous mode
Oct 3 14:48:29 nebula1 ovs-vsctl: 00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --may-exist add-port nebula one-admin tag=702 -- set Interface one-admin "external-ids:attached-mac=\"52:54:00:6e:bf:dc\"" -- set Interface one-admin "external-ids:iface-id=\"bbbd86ac-53bc-4f60-8f84-886d9ec20996\"" -- set Interface one-admin "external-ids:vm-id=\"a8069a7b-97fe-4122-85a3-0abbc011f540\"" -- set Interface one-admin external-ids:iface-status=active
Oct 3 14:48:29 nebula1 kernel: [  646.913444] device one-admin entered promiscuous mode
Oct 3 14:48:30 nebula1 external/libvirt[6011]: notice: Domain one was started
Oct 3 14:48:31 nebula1 stonith-ng: [3904]: notice: log_operation: Operation 'reboot' [6003] (call 0 from 2a9f4455-6b1d-42f7-9330-2a44ff6177f0) for host 'one' with device 'Stonith-ONE-Frontend' returned: 0
Oct 3 14:48:31 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: Performing: stonith -t external/libvirt -T reset one
Oct 3 14:48:31 nebula1 stonith-ng: [3904]: info: log_operation: Stonith-ONE-Frontend: success: one 0
Oct 3 14:48:31 nebula1 stonith-ng: [3904]: notice: remote_op_done: Operation reboot of one by nebula1 for nebula3[2a9f4455-6b1d-42f7-9330-2a44ff6177f0]: OK
Oct 3 14:48:31 nebula1 crmd: [3908]: notice: tengine_stonith_notify: Peer one was terminated (reboot) by nebula1 for nebula3: OK (ref=23222b1a-8499-4aa0-9964-269cad2a2f9f)
Oct 3 14:48:31 nebula1 crmd: [3908]: notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_JOIN_OFFER cause=C_HA_MESSAGE origin=route_message ]
Oct 3 14:48:31 nebula1 crmd: [3908]: info: update_dc: Set DC to nebula3 (3.0.6)
Oct 3 14:48:31 nebula1 attrd: [3906]: notice: attrd_local_callback: Sending full refresh (origin=crmd)