Red Hat Bugzilla – Attachment 294667 Details for Bug 432109
RHEL5 cmirror tracker: cpg_mcast_joined error while mirror is suspended
Description: log from taft-04
Filename:    taft-04.log
MIME Type:   text/plain
Creator:     Corey Marthaler
Created:     2008-02-12 15:29:15 UTC
Size:        6.53 KB
Feb 11 21:16:26 taft-04 qarshd[15232]: Running cmdline: echo offline > /sys/block/sde/device/state
Feb 11 21:16:26 taft-04 xinetd[6270]: EXIT: qarsh status=0 pid=15232 duration=0(sec)
Feb 11 21:16:27 taft-04 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_FLUSH]: -5
Feb 11 21:16:27 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_1 is now in-sync
Feb 11 21:16:27 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_1 is now in-sync
Feb 11 21:16:27 taft-04 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_FLUSH]: -5
Feb 11 21:16:27 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_2 is now in-sync
Feb 11 21:16:27 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_2 is now in-sync
Feb 11 21:16:31 taft-04 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_FLUSH]: -5
Feb 11 21:16:33 taft-04 kernel: sd 1:0:0:4: rejecting I/O to offline device
Feb 11 21:16:33 taft-04 lvm[7105]: No longer monitoring mirror device helter_skelter-syncd_log_2legs_1 for events
Feb 11 21:16:48 taft-04 lvm[7105]: Monitoring mirror device helter_skelter-syncd_log_2legs_1 for events
Feb 11 21:16:49 taft-04 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_FLUSH]: -5
Feb 11 21:16:49 taft-04 kernel: device-mapper: dm-log-clustered: Server error while processing request [DM_CLOG_FLUSH]: -5
Feb 11 21:17:04 taft-04 lvm[7105]: No longer monitoring mirror device helter_skelter-syncd_log_2legs_2 for events
Feb 11 21:17:13 taft-04 clogd[6785]: [TyB6n0g5] No match for cluster response: DM_CLOG_IS_REMOTE_RECOVERING:2412724
Feb 11 21:17:13 taft-04 clogd[6785]: Current list:
Feb 11 21:17:13 taft-04 clogd[6785]: [none]
Feb 11 21:17:13 taft-04 clogd[6785]: [TyB6n0g5] Error while processing CPG message, DM_CLOG_IS_REMOTE_RECOVERING: Invalid argument
Feb 11 21:17:13 taft-04 clogd[6785]: [TyB6n0g5] Response : YES
Feb 11 21:17:13 taft-04 clogd[6785]: [TyB6n0g5] Originator: 4
Feb 11 21:17:13 taft-04 clogd[6785]: [TyB6n0g5] Responder : 1
Feb 11 21:17:13 taft-04 clogd[6785]: HISTORY::
Feb 11 21:17:13 taft-04 clogd[6785]: 0:18) SEQ#=2412718, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 1:19) SEQ#=2412717, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 2:0) SEQ#=2412719, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 3:1) SEQ#=2412718, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 4:2) SEQ#=2190957, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 5:3) SEQ#=2190958, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 6:4) SEQ#=2412720, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 7:5) SEQ#=2412719, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 8:6) SEQ#=2412721, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 9:7) SEQ#=2412720, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 10:8) SEQ#=2190959, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 11:9) SEQ#=2190960, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 12:10) SEQ#=2412722, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 13:11) SEQ#=2412721, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 14:12) SEQ#=2412723, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 15:13) SEQ#=2412722, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:13 taft-04 clogd[6785]: 16:14) SEQ#=2190961, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 17:15) SEQ#=2190962, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=1, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 18:16) SEQ#=2412724, UUID=TyB6n0g5, TYPE=DM_CLOG_IS_REMOTE_RECOVERING, ORIG=4, RESP=NO
Feb 11 21:17:13 taft-04 clogd[6785]: 19:17) SEQ#=2412723, UUID=jiKwezRI, TYPE=DM_CLOG_SET_REGION_SYNC, ORIG=4, RESP=YES, RSPR=1
Feb 11 21:17:19 taft-04 lvm[7105]: Monitoring mirror device helter_skelter-syncd_log_2legs_2 for events
Feb 11 21:17:28 taft-04 kernel: device-mapper: dm-log-clustered: Request timed out on DM_CLOG_IS_REMOTE_RECOVERING:2412724 - retrying
Feb 11 21:17:35 taft-04 clogd[6785]: cpg_mcast_joined error: 9
Feb 11 21:17:35 taft-04 clogd[6785]: cluster_send failed at: local.c:212 (do_local_work)
Feb 11 21:17:35 taft-04 clogd[6785]: [] Unable to send (null) to cluster: Invalid exchange
Feb 11 21:17:35 taft-04 clogd[6785]: Bad callback on local/4
Feb 11 21:17:35 taft-04 kernel: device-mapper: dm-log-clustered: Stray request returned: <NULL>, 0
Feb 11 21:17:50 taft-04 kernel: device-mapper: dm-log-clustered: Request timed out on DM_CLOG_CLEAR_REGION:2414554 - retrying
Feb 11 21:19:37 taft-04 clogd[6785]: cpg_mcast_joined error: 9
Feb 11 21:19:37 taft-04 clogd[6785]: cluster_send failed at: local.c:212 (do_local_work)
Feb 11 21:19:37 taft-04 clogd[6785]: [] Unable to send (null) to cluster: Invalid exchange
Feb 11 21:19:37 taft-04 clogd[6785]: Bad callback on local/4
Feb 11 21:19:37 taft-04 kernel: device-mapper: dm-log-clustered: Stray request returned: <NULL>, 0
Feb 11 21:19:52 taft-04 kernel: device-mapper: dm-log-clustered: Request timed out on DM_CLOG_IS_REMOTE_RECOVERING:2452446 - retrying
Feb 11 21:21:17 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_2 is now in-sync
Feb 11 21:22:19 taft-04 qarshd[15222]: Sending child 15224 signal 2
Feb 11 21:22:19 taft-04 qarshd[15221]: Sending child 15223 signal 2
Feb 11 21:22:19 taft-04 xinetd[6270]: EXIT: qarsh status=0 pid=15221 duration=369(sec)
Feb 11 21:22:19 taft-04 xinetd[6270]: EXIT: qarsh status=0 pid=15222 duration=369(sec)
Feb 11 21:22:20 taft-04 kernel: GFS: fsid=TAFT:gfs2.3: jid=0: Trying to acquire journal lock...
Feb 11 21:22:20 taft-04 kernel: GFS: fsid=TAFT:gfs2.3: jid=0: Busy
Feb 11 21:22:20 taft-04 gfs_controld[6759]: gfs2 finish: needs recovery jid 0 nodeid 1 status 1
Feb 11 21:22:54 taft-04 lvm[7105]: helter_skelter-syncd_log_2legs_1 is now in-sync
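Note on the failure above: the repeated "cpg_mcast_joined error: 9" entries are the error named in the bug title. In the openais CPG error enumeration used on RHEL5, return code 9 is CPG_ERR_BAD_HANDLE (SA_AIS_ERR_BAD_HANDLE), meaning the daemon's CPG handle was not valid at the time of the multicast. The following sketch is not clogd's actual source; it only illustrates, assuming the openais-era CPG API, the kind of send path in do_local_work()/cluster_send that would log this message. The handle variable, function name, and payload are illustrative.

/*
 * Hedged sketch (not clogd's code): a CPG send path that emits
 * "cpg_mcast_joined error: 9" when its handle has gone bad.
 * Assumes the openais-era <openais/cpg.h> API shipped with RHEL5.
 */
#include <stdio.h>
#include <sys/uio.h>
#include <openais/cpg.h>

/* Illustrative handle; normally set up via cpg_initialize() + cpg_join(). */
static cpg_handle_t cpg_handle;

static int cluster_send_sketch(void *request, size_t len)
{
	struct iovec iov = {
		.iov_base = request,
		.iov_len  = len,
	};
	/* Multicast to the joined group with agreed (total) ordering. */
	cpg_error_t rv = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, &iov, 1);

	if (rv != CPG_OK) {
		/* rv == 9 is CPG_ERR_BAD_HANDLE: the handle is invalid,
		 * e.g. finalized or never (re)joined -- matching the
		 * "cpg_mcast_joined error: 9" lines in the log above. */
		fprintf(stderr, "cpg_mcast_joined error: %d\n", rv);
		return -1;
	}
	return 0;
}

A send against a finalized or no-longer-joined handle fails immediately with code 9 rather than blocking, which is consistent with the log's pattern of an instant cluster_send failure followed by kernel-side request timeouts and retries.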
Attachments on bug 432109: 294664 | 294665 | 294666 | 294667