Red Hat Bugzilla – Attachment 657504 Details for Bug 801355 – cman+pacemaker leads to double fences
/var/log/messages snippet

Description: /var/log/messages snippet
Filename:    messages
MIME Type:   text/plain
Creator:     michal novacek
Created:     2012-12-04 13:25:30 UTC
Size:        13.04 KB
Dec 4 06:37:47 63-node02 fence_pcmk: Requesting Pacemaker fence 63-node01 (reset)
Dec 4 06:37:47 63-node02 stonith_admin[4457]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Dec 4 06:37:47 63-node02 stonith-ng[2676]: info: initiate_remote_stonith_op: Initiating remote operation reboot for 63-node01: a79f7278-a36f-4f5a-8993-52903b95e506
Dec 4 06:37:47 63-node02 stonith-ng[2676]: info: stonith_command: Processed st_query from 63-node02: rc=0
Dec 4 06:37:47 63-node02 stonith-ng[2676]: info: call_remote_stonith: Requesting that 63-node03 perform op reboot 63-node01
Dec 4 06:37:50 63-node02 corosync[2433]: [TOTEM ] A processor failed, forming new configuration.
Dec 4 06:37:55 63-node02 corosync[2433]: [QUORUM] Members[2]: 2 3
Dec 4 06:37:55 63-node02 corosync[2433]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Dec 4 06:37:55 63-node02 kernel: dlm: closing connection to node 1
Dec 4 06:37:55 63-node02 crmd[2680]: info: cman_event_callback: Membership 244: quorum retained
Dec 4 06:37:55 63-node02 crmd[2680]: info: ais_status_callback: status: 63-node01 is now lost (was member)
Dec 4 06:37:55 63-node02 crmd[2680]: info: crm_update_peer: Node 63-node01: id=1 state=lost (new) addr=(null) votes=0 born=240 seen=240 proc=00000000000000000000000000111312
Dec 4 06:37:55 63-node02 crmd[2680]: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
Dec 4 06:37:55 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/558, version=0.11.375): ok (rc=0)
Dec 4 06:37:55 63-node02 corosync[2433]: [CPG ] chosen downlist: sender r(0) ip(192.168.102.2) ; members(old:3 left:1)
Dec 4 06:37:55 63-node02 corosync[2433]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 4 06:37:55 63-node02 fenced[2492]: fencing node 63-node01
Dec 4 06:37:55 63-node02 stonith-ng[2676]: notice: remote_op_done: Operation reboot of 63-node01 by 63-node03 for 63-node02[f74b5332-4536-4d5e-9331-ac747c4565bf]: OK
Dec 4 06:37:55 63-node02 fence_node[4440]: fence 63-node01 success
Dec 4 06:37:55 63-node02 fence_pcmk: Requesting Pacemaker fence 63-node01 (reset)
Dec 4 06:37:55 63-node02 stonith_admin[4477]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Dec 4 06:37:55 63-node02 kernel: type=1400 audit(1354624675.637:65): avc: denied { connectto } for pid=4477 comm="stonith_admin" path="/var/run/crm/st_command" scontext=unconfined_u:system_r:fenced_t:s0 tcontext=unconfined_u:system_r:initrc_t:s0 tclass=unix_stream_socket
Dec 4 06:37:55 63-node02 stonith-ng[2676]: info: initiate_remote_stonith_op: Initiating remote operation reboot for 63-node01: fc6c02bf-9386-4392-b597-e2ad0a63d90f
Dec 4 06:37:55 63-node02 stonith-ng[2676]: info: stonith_command: Processed st_query from 63-node02: rc=0
Dec 4 06:37:55 63-node02 stonith-ng[2676]: info: call_remote_stonith: Requesting that 63-node03 perform op reboot 63-node01
Dec 4 06:37:56 63-node02 crmd[2680]: warning: match_down_event: No match for shutdown action on 63-node01
Dec 4 06:37:56 63-node02 crmd[2680]: info: te_update_diff: Stonith/shutdown of 63-node01 not matched
Dec 4 06:37:56 63-node02 crmd[2680]: info: abort_transition_graph: te_update_diff:234 - Triggered transition abort (complete=1, tag=node_state, id=63-node01, magic=NA, cib=0.11.376) : Node failure
Dec 4 06:37:56 63-node02 crmd[2680]: notice: tengine_stonith_notify: Peer 63-node01 was terminated (reboot) by 63-node03 for 63-node02: OK (ref=a79f7278-a36f-4f5a-8993-52903b95e506)
Dec 4 06:37:56 63-node02 crmd[2680]: notice: tengine_stonith_notify: Notified CMAN that '63-node01' is now fenced
Dec 4 06:37:56 63-node02 crmd[2680]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 4 06:37:56 63-node02 pengine[2679]: info: unpack_config: Startup probes: enabled
Dec 4 06:37:56 63-node02 pengine[2679]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 4 06:37:56 63-node02 pengine[2679]: info: unpack_domains: Unpacking domains
Dec 4 06:37:56 63-node02 pengine[2679]: info: determine_online_status: Node 63-node03 is online
Dec 4 06:37:56 63-node02 pengine[2679]: info: determine_online_status: Node 63-node02 is online
Dec 4 06:37:56 63-node02 pengine[2679]: warning: pe_fence_node: Node 63-node01 will be fenced because it is un-expectedly down
Dec 4 06:37:56 63-node02 pengine[2679]: info: determine_online_status_fencing: #011ha_state=active, ccm_state=false, crm_state=online, join_state=member, expected=member
Dec 4 06:37:56 63-node02 pengine[2679]: warning: determine_online_status: Node 63-node01 is unclean
Dec 4 06:37:56 63-node02 pengine[2679]: info: native_print: virt-fencing#011(stonith:fence_xvm):#011Started 63-node03
Dec 4 06:37:56 63-node02 pengine[2679]: warning: stage6: Scheduling Node 63-node01 for STONITH
Dec 4 06:37:56 63-node02 pengine[2679]: info: LogActions: Leave virt-fencing#011(Started 63-node03)
Dec 4 06:37:56 63-node02 crmd[2680]: notice: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Dec 4 06:37:56 63-node02 crmd[2680]: info: do_te_invoke: Processing graph 109 (ref=pe_calc-dc-1354624676-377) derived from /var/lib/pengine/pe-warn-18.bz2
Dec 4 06:37:56 63-node02 crmd[2680]: notice: te_fence_node: Executing reboot fencing operation (9) on 63-node01 (timeout=60000)
Dec 4 06:37:56 63-node02 stonith-ng[2676]: info: initiate_remote_stonith_op: Initiating remote operation reboot for 63-node01: 539be4a5-e5a9-4668-ba2f-b77a6c91b1db
Dec 4 06:37:56 63-node02 stonith-ng[2676]: info: stonith_command: Processed st_query from 63-node02: rc=0
Dec 4 06:37:56 63-node02 stonith-ng[2676]: info: call_remote_stonith: Requesting that 63-node03 perform op reboot 63-node01
Dec 4 06:37:56 63-node02 pengine[2679]: warning: process_pe_message: Transition 109: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-18.bz2
Dec 4 06:37:56 63-node02 pengine[2679]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Dec 4 06:37:57 63-node02 stonith-ng[2676]: notice: remote_op_done: Operation reboot of 63-node01 by 63-node03 for 63-node02[136eb94d-51ac-4f2e-b2ca-c6937bd70f1d]: OK
Dec 4 06:37:57 63-node02 crmd[2680]: notice: tengine_stonith_notify: Peer 63-node01 was terminated (reboot) by 63-node03 for 63-node02: OK (ref=fc6c02bf-9386-4392-b597-e2ad0a63d90f)
Dec 4 06:37:57 63-node02 crmd[2680]: notice: tengine_stonith_notify: Notified CMAN that '63-node01' is now fenced
Dec 4 06:37:57 63-node02 fenced[2492]: fence 63-node01 success
Dec 4 06:37:59 63-node02 stonith-ng[2676]: notice: remote_op_done: Operation reboot of 63-node01 by 63-node03 for 63-node02[50d92609-8194-437a-9328-0f378ba3d2a0]: OK
Dec 4 06:37:59 63-node02 crmd[2680]: info: tengine_stonith_callback: StonithOp <st-reply st_origin="stonith_construct_async_reply" t="stonith-ng" st_op="reboot" st_remote_op="539be4a5-e5a9-4668-ba2f-b77a6c91b1db" st_clientid="50d92609-8194-437a-9328-0f378ba3d2a0" st_target="63-node01" st_device_action="st_fence" st_callid="0" st_callopt="0" st_rc="0" st_output="-- args @ 0x7fff4af53ff0 --#012 args->domain = 63-node01#012 args->op = 2#012 args->net.key_file = /etc/cluster/fence_xvm.key#012 args->net.hash = 2#012 args->net.addr = 225.0.0.12#012 args->n
Dec 4 06:37:59 63-node02 crmd[2680]: notice: crmd_peer_update: Status update: Client 63-node01/crmd now has status [offline] (DC=true)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='63-node01']/lrm (origin=local/crmd/562, version=0.11.378): ok (rc=0)
Dec 4 06:37:59 63-node02 crmd[2680]: notice: tengine_stonith_notify: Peer 63-node01 was terminated (reboot) by 63-node03 for 63-node02: OK (ref=539be4a5-e5a9-4668-ba2f-b77a6c91b1db)
Dec 4 06:37:59 63-node02 crmd[2680]: notice: tengine_stonith_notify: Notified CMAN that '63-node01' is now fenced
Dec 4 06:37:59 63-node02 crmd[2680]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 4 06:37:59 63-node02 crmd[2680]: info: abort_transition_graph: do_te_invoke:169 - Triggered transition abort (complete=0) : Peer Halt
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='63-node01']/transient_attributes (origin=local/crmd/563, version=0.11.379): ok (rc=0)
Dec 4 06:37:59 63-node02 crmd[2680]: info: cib_fencing_updated: Fencing update 561 for 63-node01: complete
Dec 4 06:37:59 63-node02 crmd[2680]: info: abort_transition_graph: te_update_diff:320 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=virt-fencing_last_0, magic=0:7;5:105:7:17a7e886-090c-466c-9b11-39ec87620d31, cib=0.11.378) : Resource op removal
Dec 4 06:37:59 63-node02 crmd[2680]: info: abort_transition_graph: te_update_diff:194 - Triggered transition abort (complete=0, tag=transient_attributes, id=63-node01, magic=NA, cib=0.11.379) : Transient attribute: removal
Dec 4 06:37:59 63-node02 crmd[2680]: notice: run_graph: ==== Transition 109 (Complete=2, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-warn-18.bz2): Stopped
Dec 4 06:37:59 63-node02 crmd[2680]: info: abort_transition_graph: do_te_invoke:169 - Triggered transition abort (complete=1) : Peer Halt
Dec 4 06:37:59 63-node02 crmd[2680]: info: join_make_offer: Making join offers based on membership 244
Dec 4 06:37:59 63-node02 crmd[2680]: info: do_dc_join_offer_all: join-38: Waiting on 2 outstanding join acks
Dec 4 06:37:59 63-node02 crmd[2680]: info: update_dc: Set DC to 63-node02 (3.0.6)
Dec 4 06:37:59 63-node02 rsyslogd-2177: imuxsock begins to drop messages from pid 2680 due to rate-limiting
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/566, version=0.11.380): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/567, version=0.11.381): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/568, version=0.11.382): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='63-node02']/lrm (origin=local/crmd/569, version=0.11.383): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='63-node03']/lrm (origin=local/crmd/571, version=0.11.385): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/573, version=0.11.387): ok (rc=0)
Dec 4 06:37:59 63-node02 cib[2675]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/575, version=0.11.389): ok (rc=0)
Dec 4 06:37:59 63-node02 pengine[2679]: info: unpack_config: Startup probes: enabled
Dec 4 06:37:59 63-node02 pengine[2679]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 4 06:37:59 63-node02 pengine[2679]: info: unpack_domains: Unpacking domains
Dec 4 06:37:59 63-node02 pengine[2679]: info: determine_online_status: Node 63-node03 is online
Dec 4 06:37:59 63-node02 pengine[2679]: info: determine_online_status: Node 63-node02 is online
Dec 4 06:37:59 63-node02 pengine[2679]: info: native_print: virt-fencing#011(stonith:fence_xvm):#011Started 63-node03
Dec 4 06:37:59 63-node02 pengine[2679]: info: LogActions: Leave virt-fencing#011(Started 63-node03)
Dec 4 06:37:59 63-node02 attrd[2678]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Dec 4 06:37:59 63-node02 attrd[2678]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec 4 06:37:59 63-node02 pengine[2679]: notice: process_pe_message: Transition 110: PEngine Input stored in: /var/lib/pengine/pe-input-46.bz2