Red Hat Bugzilla – Attachment 1470004 Details for Bug 1607530: Odd 'monitor' op connection errors in fence_zvmip
corosync.log snippet of the failure event
fence_zvmip_failure.log (text/plain), 36.50 KB, created by Andrew Price on 2018-07-23 16:26:10 UTC
Description: corosync.log snippet of the failure event
Filename: fence_zvmip_failure.log
MIME Type: text/plain
Creator: Andrew Price
Created: 2018-07-23 16:26:10 UTC
Size: 36.50 KB
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: warning: log_action: fence_zvmip[29579] stderr: [ 2018-07-22 23:38:51,496 ERROR: Unable to connect/login to fencing device ]
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: warning: log_action: fence_zvmip[29579] stderr: [ ]
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: warning: log_action: fence_zvmip[29579] stderr: [ ]
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: notice: log_operation: Operation 'monitor' [29579] for device 's390_ssi-fence1' returned: -201 (Generic Pacemaker error)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: error: process_lrm_event: Result of monitor operation for s390_ssi-fence1 on qe-c02-m01: Error | call=26 key=s390_ssi-fence1_monitor_60000 confirmed=false status=4 cib-update=344
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/344)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.8 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.9 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=9
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']: <lrm_rsc_op id="s390_ssi-fence1_last_failure_0" operation_key="s390_ssi-fence1_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.14" transition-key="13:1:0:c15bea02-b55d-427c-83f7-85b333561aa2" transition-magic="4:1;13:1:0:c15bea02-b55d-427c-83f7-85b333561aa2" exit-reason=""
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/344, version=0.24.9)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: create lrm_resource[@id='s390_ssi-fence1']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.9
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: abort_transition_graph: Transition aborted by operation s390_ssi-fence1_monitor_60000 'create' on qe-c02-m01: Old event | magic=4:1;13:1:0:c15bea02-b55d-427c-83f7-85b333561aa2 cib=0.24.9 source=process_graph_event:499 complete=true
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: update_failcount: Updating failcount for s390_ssi-fence1 on qe-c02-m01 after failed monitor: rc=1 (update=value++, time=1532317131)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: update_attrd_helper: Connecting to attribute manager ... 5 retries remaining
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: process_graph_event: Detected action (1.13) s390_ssi-fence1_monitor_60000.26=unknown error: failed
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_client_update: Expanded fail-count-s390_ssi-fence1#monitor_60000=value++ to 1
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_peer_update: Setting fail-count-s390_ssi-fence1#monitor_60000[qe-c02-m01]: (null) -> 1 from qe-c02-m01
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: write_attribute: Sent update 4 with 1 changes for fail-count-s390_ssi-fence1#monitor_60000, id=<n/a>, set=(null)
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_peer_update: Setting last-failure-s390_ssi-fence1#monitor_60000[qe-c02-m01]: (null) -> 1532317131 from qe-c02-m01
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/attrd/4)
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: write_attribute: Sent update 5 with 1 changes for last-failure-s390_ssi-fence1#monitor_60000, id=<n/a>, set=(null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/attrd/5)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.9 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.10 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=10
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']: <transient_attributes id="1"/>
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ <instance_attributes id="status-1">
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ <nvpair id="status-1-fail-count-s390_ssi-fence1.monitor_60000" name="fail-count-s390_ssi-fence1#monitor_60000" value="1"/>
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ </instance_attributes>
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ </transient_attributes>
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/attrd/4, version=0.24.10)
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_cib_callback: Update 4 for fail-count-s390_ssi-fence1#monitor_60000: OK (0)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.10 2
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_cib_callback: Update 4 for fail-count-s390_ssi-fence1#monitor_60000[qe-c02-m01]=1: OK (0)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.11 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=11
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']: <nvpair id="status-1-last-failure-s390_ssi-fence1.monitor_60000" name="last-failure-s390_ssi-fence1#monitor_60000" value="1532317131"/>
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/attrd/5, version=0.24.11)
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_cib_callback: Update 5 for last-failure-s390_ssi-fence1#monitor_60000: OK (0)
Jul 22 23:38:51 [2581] qe-c02-m01.s390.bos.redhat.com attrd: info: attrd_cib_callback: Update 5 for last-failure-s390_ssi-fence1#monitor_60000[qe-c02-m01]=1532317131: OK (0)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: abort_transition_graph: Transition aborted by transient_attributes.1 'create': Transient attribute change | cib=0.24.10 source=abort_unless_down:341 path=/cib/status/node_state[@id='1'] complete=true
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: abort_transition_graph: Transition aborted by status-1-last-failure-s390_ssi-fence1.monitor_60000 doing create last-failure-s390_ssi-fence1#monitor_60000=1532317131: Transient attribute change | cib=0.24.11 source=abort_unless_down:341 path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'] complete=true
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m01 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m01 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m02 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m02 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m03 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m03 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: warning: unpack_rsc_op_failure: Processing failed monitor of s390_ssi-fence1 on qe-c02-m01: unknown error | rc=1
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence1 (stonith:fence_zvmip): FAILED qe-c02-m01
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence2 (stonith:fence_zvmip): Started qe-c02-m02
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: dlm-clone [dlm]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clvmd-clone [clvmd]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clusterfs-clone [clusterfs]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: RecurringOp: Start recurring monitor (60s) for s390_ssi-fence1 on qe-c02-m01
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: notice: LogAction: * Recover s390_ssi-fence1 ( qe-c02-m01 )
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave s390_ssi-fence2 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: notice: process_pe_message: Calculated transition 240, saving inputs in /var/lib/pacemaker/pengine/pe-input-955.bz2
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: handle_response: pe_calc calculation pe_calc-dc-1532317131-310 is obsolete
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m01 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m01 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m02 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m02 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m03 is active
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m03 is online
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: warning: unpack_rsc_op_failure: Processing failed monitor of s390_ssi-fence1 on qe-c02-m01: unknown error | rc=1
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence1 (stonith:fence_zvmip): FAILED qe-c02-m01
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence2 (stonith:fence_zvmip): Started qe-c02-m02
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: dlm-clone [dlm]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clvmd-clone [clvmd]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clusterfs-clone [clusterfs]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: pe_get_failcount: s390_ssi-fence1 has failed 1 times on qe-c02-m01
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: check_migration_threshold: s390_ssi-fence1 can fail 999999 more times on qe-c02-m01 before being forced off
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: RecurringOp: Start recurring monitor (60s) for s390_ssi-fence1 on qe-c02-m01
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: notice: LogAction: * Recover s390_ssi-fence1 ( qe-c02-m01 )
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave s390_ssi-fence2 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:0 (Started qe-c02-m01)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:1 (Started qe-c02-m02)
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:2 (Started qe-c02-m03)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Jul 22 23:38:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: notice: process_pe_message: Calculated transition 241, saving inputs in /var/lib/pacemaker/pengine/pe-input-956.bz2
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_te_invoke: Processing graph 241 (ref=pe_calc-dc-1532317131-311) derived from /var/lib/pacemaker/pengine/pe-input-956.bz2
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: te_rsc_command: Initiating stop operation s390_ssi-fence1_stop_0 locally on qe-c02-m01 | action 2
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_lrm_rsc_op: Performing key=2:241:0:c15bea02-b55d-427c-83f7-85b333561aa2 op=s390_ssi-fence1_stop_0
Jul 22 23:38:51 [2580] qe-c02-m01.s390.bos.redhat.com lrmd: info: log_execute: executing - rsc:s390_ssi-fence1 action:stop call_id:34
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: process_lrm_event: Result of monitor operation for s390_ssi-fence1 on qe-c02-m01: Cancelled | call=26 key=s390_ssi-fence1_monitor_60000 confirmed=true
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/348)
Jul 22 23:38:51 [2580] qe-c02-m01.s390.bos.redhat.com lrmd: info: log_finished: finished - rsc:s390_ssi-fence1 action:stop call_id:34 exit-code:0 exec-time:1ms queue-time:0ms
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: process_lrm_event: Result of stop operation for s390_ssi-fence1 on qe-c02-m01: 0 (ok) | call=34 key=s390_ssi-fence1_stop_0 confirmed=true cib-update=349
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.11 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.12 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=12
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_last_0']: @operation_key=s390_ssi-fence1_stop_0, @operation=stop, @transition-key=2:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @transition-magic=-1:193;2:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=-1, @rc-code=193, @op-status=-1, @last-run=1532317131, @last-rc-change=153231713
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/348, version=0.24.12)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/349)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_last_0']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.12
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.12 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.13 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=13
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_last_0']: @transition-magic=0:0;2:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=34, @rc-code=0, @op-status=0, @exec-time=1
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/349, version=0.24.13)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_last_0']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.13
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: match_graph_event: Action s390_ssi-fence1_stop_0 (2) confirmed on qe-c02-m01 (rc=0)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: te_rsc_command: Initiating start operation s390_ssi-fence1_start_0 locally on qe-c02-m01 | action 14
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_lrm_rsc_op: Performing key=14:241:0:c15bea02-b55d-427c-83f7-85b333561aa2 op=s390_ssi-fence1_start_0
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/350)
Jul 22 23:38:51 [2580] qe-c02-m01.s390.bos.redhat.com lrmd: info: log_execute: executing - rsc:s390_ssi-fence1 action:start call_id:35
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.13 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.14 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=14
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_last_0']: @operation_key=s390_ssi-fence1_start_0, @operation=start, @transition-key=14:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @transition-magic=-1:193;14:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=-1, @rc-code=193, @op-status=-1, @exec-time=0
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/350, version=0.24.14)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_last_0']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.14
Jul 22 23:38:51 [2580] qe-c02-m01.s390.bos.redhat.com lrmd: info: log_finished: finished - rsc:s390_ssi-fence1 action:start call_id:35 exit-code:0 exec-time:331ms queue-time:0ms
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: process_lrm_event: Result of start operation for s390_ssi-fence1 on qe-c02-m01: 0 (ok) | call=35 key=s390_ssi-fence1_start_0 confirmed=true cib-update=351
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/351)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.14 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.15 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=15
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_last_0']: @transition-magic=0:0;14:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=35, @rc-code=0, @op-status=0, @exec-time=331
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/351, version=0.24.15)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: match_graph_event: Action s390_ssi-fence1_start_0 (14) confirmed on qe-c02-m01 (rc=0)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: te_rsc_command: Initiating monitor operation s390_ssi-fence1_monitor_60000 locally on qe-c02-m01 | action 1
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_lrm_rsc_op: Performing key=1:241:0:c15bea02-b55d-427c-83f7-85b333561aa2 op=s390_ssi-fence1_monitor_60000
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/352)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.15 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.16 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=16
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_monitor_60000']: @transition-key=1:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @transition-magic=-1:193;1:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=-1, @rc-code=193, @op-status=-1, @last-rc-change=1532317131, @exec-time=0
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_last_0']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.15
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/352, version=0.24.16)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_monitor_60000']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.16
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: process_lrm_event: Result of monitor operation for s390_ssi-fence1 on qe-c02-m01: 0 (ok) | call=36 key=s390_ssi-fence1_monitor_60000 confirmed=false cib-update=353
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/353)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: --- 0.24.16 2
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: Diff: +++ 0.24.17 (null)
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib: @num_updates=17
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='s390_ssi-fence1']/lrm_rsc_op[@id='s390_ssi-fence1_monitor_60000']: @transition-magic=0:0;1:241:0:c15bea02-b55d-427c-83f7-85b333561aa2, @call-id=36, @rc-code=0, @op-status=0, @exec-time=99, @queue-time=1
Jul 22 23:38:51 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=qe-c02-m01/crmd/353, version=0.24.17)
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: update_cib_stonith_devices_v2: Updating device list from the cib: modify lrm_rsc_op[@id='s390_ssi-fence1_monitor_60000']
Jul 22 23:38:51 [2579] qe-c02-m01.s390.bos.redhat.com stonith-ng: info: cib_devices_update: Updating devices to version 0.24.17
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: match_graph_event: Action s390_ssi-fence1_monitor_60000 (1) confirmed on qe-c02-m01 (rc=0)
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: run_graph: Transition 241 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-956.bz2): Complete
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Jul 22 23:38:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Jul 22 23:38:56 [2578] qe-c02-m01.s390.bos.redhat.com cib: info: cib_process_ping: Reporting our current digest to qe-c02-m01: 565ea55674aaab2c2b20d5111ae0f633 for 0.24.17 (0x2aa20a60340 0)
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped (900000ms)
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m01 is active
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m01 is online
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m02 is active
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m02 is online
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status_fencing: Node qe-c02-m03 is active
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: determine_online_status: Node qe-c02-m03 is online
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: warning: unpack_rsc_op_failure: Processing failed monitor of s390_ssi-fence1 on qe-c02-m01: unknown error | rc=1
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 1 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 2 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: unpack_node_loop: Node 3 is already processed
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence1 (stonith:fence_zvmip): Started qe-c02-m01
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: common_print: s390_ssi-fence2 (stonith:fence_zvmip): Started qe-c02-m02
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: dlm-clone [dlm]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clvmd-clone [clvmd]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: clone_print: Clone Set: clusterfs-clone [clusterfs]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: short_print: Started: [ qe-c02-m01 qe-c02-m02 qe-c02-m03 ]
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: pe_get_failcount: s390_ssi-fence1 has failed 1 times on qe-c02-m01
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: check_migration_threshold: s390_ssi-fence1 can fail 999999 more times on qe-c02-m01 before being forced off
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave s390_ssi-fence1 (Started qe-c02-m01)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave s390_ssi-fence2 (Started qe-c02-m02)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:0 (Started qe-c02-m01)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:1 (Started qe-c02-m02)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave dlm:2 (Started qe-c02-m03)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:0 (Started qe-c02-m01)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:1 (Started qe-c02-m02)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clvmd:2 (Started qe-c02-m03)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:0 (Started qe-c02-m01)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:1 (Started qe-c02-m02)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: info: LogActions: Leave clusterfs:2 (Started qe-c02-m03)
Jul 22 23:53:51 [2582] qe-c02-m01.s390.bos.redhat.com pengine: notice: process_pe_message: Calculated transition 242, saving inputs in /var/lib/pacemaker/pengine/pe-input-957.bz2
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_te_invoke: Processing graph 242 (ref=pe_calc-dc-1532318031-315) derived from /var/lib/pacemaker/pengine/pe-input-957.bz2
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: run_graph: Transition 242 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-957.bz2): Complete
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Jul 22 23:53:51 [2583] qe-c02-m01.s390.bos.redhat.com crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
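Note: the "Unable to connect/login to fencing device" stderr at the top of this snippet is emitted by the fence agent itself, so the failing recurring 'monitor' operation can be exercised by hand outside Pacemaker. A minimal sketch using the agent's standard options; the address and credentials below are placeholders, not values taken from this log:

    # run the same check stonith-ng performs; exit status 0 means connect/login succeeded
    fence_zvmip --ip=zvm-smapi.example.com --username=opuser --password=secret --action=monitor

A nonzero exit status here reproduces the connection/login failure recorded above, independent of the cluster stack.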