Red Hat Bugzilla – Attachment 871960 Details for Bug 1074024 – Stonith does not fence node
Description: log snippets from other nodes with running network service
Filename: logs.txt
MIME Type: text/plain
Creator: Miroslav Lisik
Created: 2014-03-07 16:46:51 UTC
Size: 6.75 KB
LOG FROM 1st machine $NODE1
===========================
===========================

Mar 7 16:56:20 virt-066 corosync[26733]: [TOTEM ] A processor failed, forming new configuration.
Mar 7 16:56:21 virt-066 corosync[26733]: [TOTEM ] A new membership (10.34.71.66:88) was formed. Members left: 2
Mar 7 16:56:21 virt-066 crmd[26754]: notice: peer_update_callback: Our peer on the DC is dead
Mar 7 16:56:21 virt-066 crmd[26754]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]
Mar 7 16:56:21 virt-066 corosync[26733]: [QUORUM] Members[2]: 1 3
Mar 7 16:56:21 virt-066 crmd[26754]: notice: crm_update_peer_state: pcmk_quorum_notification: Node virt-067.cluster-qe.lab.eng.brq.redhat.com[2] - state is now lost (was member)
Mar 7 16:56:21 virt-066 pacemakerd[26748]: notice: crm_update_peer_state: pcmk_quorum_notification: Node virt-067.cluster-qe.lab.eng.brq.redhat.com[2] - state is now lost (was member)
Mar 7 16:56:21 virt-066 corosync[26733]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 7 16:56:21 virt-066 crmd[26754]: notice: do_state_transition: State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Mar 7 16:56:22 virt-066 attrd[26752]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Mar 7 16:56:22 virt-066 attrd[26752]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Mar 7 16:56:22 virt-066 crmd[26754]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Mar 7 16:56:28 virt-066 stonith-ng[26750]: notice: dynamic_list_search_cb: Disabling port list queries for xvm3 (255): Failed to listen: Address already in use
Unknown response (-1)
Mar 7 16:56:28 virt-066 stonith-ng[26750]: notice: remote_op_done: Operation reboot of virt-067.cluster-qe.lab.eng.brq.redhat.com by virt-068.cluster-qe.lab.eng.brq.redhat.com for crmd.3402@virt-068.cluster-qe.lab.eng.brq.redhat.com.52842aaa: No such device
Mar 7 16:56:28 virt-066 crmd[26754]: notice: tengine_stonith_notify: Peer virt-067.cluster-qe.lab.eng.brq.redhat.com was not terminated (reboot) by virt-068.cluster-qe.lab.eng.brq.redhat.com for virt-068.cluster-qe.lab.eng.brq.redhat.com: No such device (ref=52842aaa-b961-4e3e-b9b1-6bfb89f5abc8) by client crmd.3402


LOG FROM 3rd machine $NODE3
===========================
===========================

Mar 7 16:56:20 virt-068 corosync[3381]: [TOTEM ] A processor failed, forming new configuration.
Mar 7 16:56:21 virt-068 corosync[3381]: [TOTEM ] A new membership (10.34.71.66:88) was formed. Members left: 2
Mar 7 16:56:21 virt-068 crmd[3402]: notice: peer_update_callback: Our peer on the DC is dead
Mar 7 16:56:21 virt-068 crmd[3402]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]
Mar 7 16:56:21 virt-068 corosync[3381]: [QUORUM] Members[2]: 1 3
Mar 7 16:56:21 virt-068 crmd[3402]: notice: crm_update_peer_state: pcmk_quorum_notification: Node virt-067.cluster-qe.lab.eng.brq.redhat.com[2] - state is now lost (was member)
Mar 7 16:56:21 virt-068 pacemakerd[3396]: notice: crm_update_peer_state: pcmk_quorum_notification: Node virt-067.cluster-qe.lab.eng.brq.redhat.com[2] - state is now lost (was member)
Mar 7 16:56:21 virt-068 corosync[3381]: [MAIN ] Completed service synchronization, ready to provide service.
Mar 7 16:56:21 virt-068 crmd[3402]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Mar 7 16:56:22 virt-068 attrd[3400]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Mar 7 16:56:22 virt-068 attrd[3400]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Mar 7 16:56:23 virt-068 pengine[3401]: warning: pe_fence_node: Node virt-067.cluster-qe.lab.eng.brq.redhat.com will be fenced because xvm2 is thought to be active there
Mar 7 16:56:23 virt-068 pengine[3401]: warning: custom_action: Action xvm2_stop_0 on virt-067.cluster-qe.lab.eng.brq.redhat.com is unrunnable (offline)
Mar 7 16:56:23 virt-068 pengine[3401]: warning: stage6: Scheduling Node virt-067.cluster-qe.lab.eng.brq.redhat.com for STONITH
Mar 7 16:56:23 virt-068 pengine[3401]: notice: LogActions: Move xvm2 (Started virt-067.cluster-qe.lab.eng.brq.redhat.com -> virt-068.cluster-qe.lab.eng.brq.redhat.com)
Mar 7 16:56:23 virt-068 crmd[3402]: notice: te_fence_node: Executing reboot fencing operation (15) on virt-067.cluster-qe.lab.eng.brq.redhat.com (timeout=60000)
Mar 7 16:56:23 virt-068 stonith-ng[3398]: notice: handle_request: Client crmd.3402.e36baadb wants to fence (reboot) 'virt-067.cluster-qe.lab.eng.brq.redhat.com' with device '(any)'
Mar 7 16:56:23 virt-068 stonith-ng[3398]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for virt-067.cluster-qe.lab.eng.brq.redhat.com: 52842aaa-b961-4e3e-b9b1-6bfb89f5abc8 (0)
Mar 7 16:56:23 virt-068 pengine[3401]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-1.bz2
Mar 7 16:56:28 virt-068 stonith-ng[3398]: notice: dynamic_list_search_cb: Disabling port list queries for xvm3 (255): Failed to listen: Address already in use
Unknown response (-1)
Mar 7 16:56:28 virt-068 stonith-ng[3398]: error: remote_op_done: Operation reboot of virt-067.cluster-qe.lab.eng.brq.redhat.com by virt-068.cluster-qe.lab.eng.brq.redhat.com for crmd.3402@virt-068.cluster-qe.lab.eng.brq.redhat.com.52842aaa: No such device
Mar 7 16:56:28 virt-068 crmd[3402]: notice: tengine_stonith_callback: Stonith operation 2/15:0:0:37b25322-342a-426f-8407-e95e03d297e8: No such device (-19)
Mar 7 16:56:28 virt-068 crmd[3402]: notice: tengine_stonith_callback: Stonith operation 2 for virt-067.cluster-qe.lab.eng.brq.redhat.com failed (No such device): aborting transition.
Mar 7 16:56:28 virt-068 crmd[3402]: notice: tengine_stonith_notify: Peer virt-067.cluster-qe.lab.eng.brq.redhat.com was not terminated (reboot) by virt-068.cluster-qe.lab.eng.brq.redhat.com for virt-068.cluster-qe.lab.eng.brq.redhat.com: No such device (ref=52842aaa-b961-4e3e-b9b1-6bfb89f5abc8) by client crmd.3402
Mar 7 16:56:28 virt-068 crmd[3402]: notice: run_graph: Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-1.bz2): Stopped
Mar 7 16:56:28 virt-068 crmd[3402]: notice: too_many_st_failures: No devices found in cluster to fence virt-067.cluster-qe.lab.eng.brq.redhat.com, giving up
Mar 7 16:56:28 virt-068 crmd[3402]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]