Red Hat Bugzilla – Attachment 903016 Details for Bug 1104899: Starting clvmd hangs with incorrect firewall rules
syslog with kernel/dlm debugging enabled, an-a04n02
messages.an-a04n02.kernel-debugging (text/plain), 10.03 KB, created by Madison Kelly on 2014-06-06 19:22:38 UTC
Description: syslog with kernel/dlm debugging enabled, an-a04n02
Filename: messages.an-a04n02.kernel-debugging
MIME Type: text/plain
Creator: Madison Kelly
Created: 2014-06-06 19:22:38 UTC
Size: 10.03 KB
Jun 6 15:12:33 an-a04n02 ntpd[2497]: 0.0.0.0 c612 02 freq_set kernel 24.564 PPM
Jun 6 15:12:33 an-a04n02 ntpd[2497]: 0.0.0.0 c615 05 clock_sync
Jun 6 15:13:25 an-a04n02 kernel: DLM (built Apr 11 2014 17:28:07) installed
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Successfully parsed cman config
Jun 6 15:13:27 an-a04n02 corosync[3876]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Jun 6 15:13:27 an-a04n02 corosync[3876]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jun 6 15:13:27 an-a04n02 corosync[3876]: [TOTEM ] The network interface [10.20.40.2] is now up.
Jun 6 15:13:27 an-a04n02 corosync[3876]: [QUORUM] Using quorum provider quorum_cman
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Jun 6 15:13:27 an-a04n02 corosync[3876]: [CMAN ] CMAN 3.0.12.1 (built Apr 3 2014 05:12:26) started
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: openais checkpoint service B.01.01
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync configuration service
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync profile loading service
Jun 6 15:13:27 an-a04n02 corosync[3876]: [QUORUM] Using quorum provider quorum_cman
Jun 6 15:13:27 an-a04n02 corosync[3876]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
Jun 6 15:13:27 an-a04n02 corosync[3876]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 6 15:13:27 an-a04n02 corosync[3876]: [CMAN ] quorum regained, resuming activity
Jun 6 15:13:27 an-a04n02 corosync[3876]: [QUORUM] This node is within the primary component and will provide service.
Jun 6 15:13:27 an-a04n02 corosync[3876]: [QUORUM] Members[1]: 2
Jun 6 15:13:27 an-a04n02 corosync[3876]: [QUORUM] Members[1]: 2
Jun 6 15:13:27 an-a04n02 corosync[3876]: [CPG ] chosen downlist: sender r(0) ip(10.20.40.2) ; members(old:0 left:0)
Jun 6 15:13:27 an-a04n02 corosync[3876]: [MAIN ] Completed service synchronization, ready to provide service.
Jun 6 15:13:28 an-a04n02 corosync[3876]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 6 15:13:28 an-a04n02 corosync[3876]: [QUORUM] Members[2]: 1 2
Jun 6 15:13:28 an-a04n02 corosync[3876]: [QUORUM] Members[2]: 1 2
Jun 6 15:13:28 an-a04n02 corosync[3876]: [CPG ] chosen downlist: sender r(0) ip(10.20.40.2) ; members(old:1 left:0)
Jun 6 15:13:28 an-a04n02 corosync[3876]: [MAIN ] Completed service synchronization, ready to provide service.
Jun 6 15:13:31 an-a04n02 fenced[3931]: fenced 3.0.12.1 started
Jun 6 15:13:31 an-a04n02 dlm_controld[3955]: dlm_controld 3.0.12.1 started
Jun 6 15:13:32 an-a04n02 gfs_controld[4002]: gfs_controld 3.0.12.1 started
Jun 6 15:13:34 an-a04n02 pacemaker: Starting Pacemaker Cluster Manager
Jun 6 15:13:34 an-a04n02 pacemakerd[4090]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 pacemakerd[4090]: notice: main: Starting Pacemaker 1.1.10-14.el6_5.3 (Build: 368c726): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman
Jun 6 15:13:34 an-a04n02 cib[4096]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 attrd[4099]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 attrd[4099]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jun 6 15:13:34 an-a04n02 attrd[4099]: notice: main: Starting mainloop...
Jun 6 15:13:34 an-a04n02 pengine[4100]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 stonith-ng[4097]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 stonith-ng[4097]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jun 6 15:13:34 an-a04n02 lrmd[4098]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 crmd[4101]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jun 6 15:13:34 an-a04n02 crmd[4101]: notice: main: CRM Git Version: 368c726
Jun 6 15:13:34 an-a04n02 cib[4096]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jun 6 15:13:35 an-a04n02 stonith-ng[4097]: notice: setup_cib: Watching for stonith topology changes
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: cman_event_callback: Membership 140: quorum acquired
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: crm_update_peer_state: cman_event_callback: Node an-a04n01.alteeve.ca[1] - state is now member (was (null))
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: crm_update_peer_state: cman_event_callback: Node an-a04n02.alteeve.ca[2] - state is now member (was (null))
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: do_started: The local CRM is operational
Jun 6 15:13:35 an-a04n02 crmd[4101]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun 6 15:13:36 an-a04n02 stonith-ng[4097]: notice: stonith_device_register: Added 'fence_n01_ipmi' to the device list (1 active devices)
Jun 6 15:13:37 an-a04n02 stonith-ng[4097]: notice: stonith_device_register: Added 'fence_n02_ipmi' to the device list (2 active devices)
Jun 6 15:13:56 an-a04n02 crmd[4101]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun 6 15:13:56 an-a04n02 crmd[4101]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun 6 15:13:56 an-a04n02 attrd[4099]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun 6 15:13:56 an-a04n02 pengine[4100]: notice: LogActions: Start fence_n01_ipmi#011(an-a04n01.alteeve.ca)
Jun 6 15:13:56 an-a04n02 pengine[4100]: notice: LogActions: Start fence_n02_ipmi#011(an-a04n02.alteeve.ca)
Jun 6 15:13:56 an-a04n02 pengine[4100]: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-11.bz2
Jun 6 15:13:56 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 7: monitor fence_n01_ipmi_monitor_0 on an-a04n02.alteeve.ca (local)
Jun 6 15:13:56 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 4: monitor fence_n01_ipmi_monitor_0 on an-a04n01.alteeve.ca
Jun 6 15:13:56 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 8: monitor fence_n02_ipmi_monitor_0 on an-a04n02.alteeve.ca (local)
Jun 6 15:13:56 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 5: monitor fence_n02_ipmi_monitor_0 on an-a04n01.alteeve.ca
Jun 6 15:13:57 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 6: probe_complete probe_complete on an-a04n02.alteeve.ca (local) - no waiting
Jun 6 15:13:57 an-a04n02 attrd[4099]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun 6 15:13:57 an-a04n02 attrd[4099]: notice: attrd_perform_update: Sent update 4: probe_complete=true
Jun 6 15:13:57 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on an-a04n01.alteeve.ca - no waiting
Jun 6 15:13:57 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 9: start fence_n01_ipmi_start_0 on an-a04n01.alteeve.ca
Jun 6 15:13:57 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 11: start fence_n02_ipmi_start_0 on an-a04n02.alteeve.ca (local)
Jun 6 15:13:58 an-a04n02 stonith-ng[4097]: notice: stonith_device_register: Device 'fence_n02_ipmi' already existed in device list (2 active devices)
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: process_lrm_event: LRM operation fence_n02_ipmi_start_0 (call=13, rc=0, cib-update=28, confirmed=true) ok
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 12: monitor fence_n02_ipmi_monitor_60000 on an-a04n02.alteeve.ca (local)
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: te_rsc_command: Initiating action 10: monitor fence_n01_ipmi_monitor_60000 on an-a04n01.alteeve.ca
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: process_lrm_event: LRM operation fence_n02_ipmi_monitor_60000 (call=16, rc=0, cib-update=29, confirmed=false) ok
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: run_graph: Transition 0 (Complete=11, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-11.bz2): Complete
Jun 6 15:13:58 an-a04n02 crmd[4101]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun 6 15:17:08 an-a04n02 kernel: dlm: Using TCP for communications
Jun 6 15:17:09 an-a04n02 clvmd: Cluster LVM daemon started - connected to CMAN
Jun 6 15:17:09 an-a04n02 kernel: dlm: connecting to 1
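The tail of the log is the part relevant to this bug: dlm switches to TCP, clvmd starts, and the trace stops at "dlm: connecting to 1", which is consistent with the hang described in the bug title when inter-node DLM traffic is blocked by the firewall. As a rough sketch only (the ports below are the common RHEL 6 cluster defaults, corosync/cman multicast on UDP 5404-5405 and DLM on TCP 21064, and are an assumption, not taken from this report), rules of roughly this shape on both nodes would permit the traffic:

    # Assumed default ports, not confirmed by this report:
    # corosync/cman totem traffic (multicast) on UDP 5404-5405
    iptables -A INPUT -p udp --dport 5404:5405 -j ACCEPT
    # DLM inter-node lock traffic on TCP 21064
    iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
    # persist the rules across reboots on RHEL 6
    service iptables save

The exact ports in use depend on the cluster.conf and dlm configuration, so the values above should be checked against the affected cluster before being applied.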