Bug 903763

Summary: [abrt] corosync-2.0.3-1.fc17: qb_rb_chunk_alloc: Process /usr/sbin/corosync was killed by signal 7 (SIGBUS)
Product: [Fedora] Fedora
Reporter: Franck C. <infos>
Component: libqb
Assignee: David Vossel <dvossel>
Status: CLOSED WONTFIX
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 18
CC: agk, dvossel, fdinitto, jfriesse, sdake
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard: abrt_hash:ce426ecbee5b5f4c41332036cd2eef375ee43169
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-05 18:26:49 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments (flags: none for all):
File: backtrace
File: cgroup
File: core_backtrace
File: dso_list
File: environ
File: limits
File: maps
File: open_fds
File: proc_pid_status
File: smolt_data
File: var_log_messages

Description Franck C. 2013-01-24 19:37:41 UTC
Description of problem:
Upgrading to Fedora 18 is only possible via yum in the case of hardware RAID1 with LVM.
After reboot, the RAID partition is not recognized, although the hard drives and LVM are.
As a result, Linux thinks the LVM partitions reside on a single hard drive
and completely messes up the inodes.
For now, F17 with kernel 3.6.6 is the last release I can use.

Any solutions?

Version-Release number of selected component:
corosync-2.0.3-1.fc17

Additional info:
backtrace_rating: 4
cmdline:        corosync
crash_function: qb_rb_chunk_alloc
executable:     /usr/sbin/corosync
kernel:         3.6.6-1.fc17.x86_64
remote_result:  NOTFOUND
uid:            0

Truncated backtrace:
Thread no. 1 (10 frames)
 #0 qb_rb_chunk_alloc at ringbuffer.c:424
 #1 _blackbox_vlogger at log_blackbox.c:69
 #2 qb_log_real_va_ at log.c:184
 #3 qb_log_real_ at log.c:212
 #4 message_handler_req_lib_cpg_mcast at cpg.c:1815
 #5 cs_ipcs_msg_process at ipc_glue.c:640
 #6 _process_request_ at ipcs.c:647
 #7 qb_ipcs_dispatch_connection_request at ipcs.c:755
 #8 _poll_dispatch_and_take_back_ at loop_poll.c:98
 #9 qb_loop_run_level at loop.c:45

Comment 1 Franck C. 2013-01-24 19:37:55 UTC
Created attachment 686925 [details]
File: backtrace

Comment 2 Franck C. 2013-01-24 19:38:32 UTC
Created attachment 686926 [details]
File: cgroup

Comment 3 Franck C. 2013-01-24 19:38:34 UTC
Created attachment 686927 [details]
File: core_backtrace

Comment 4 Franck C. 2013-01-24 19:38:36 UTC
Created attachment 686928 [details]
File: dso_list

Comment 5 Franck C. 2013-01-24 19:38:37 UTC
Created attachment 686929 [details]
File: environ

Comment 6 Franck C. 2013-01-24 19:38:39 UTC
Created attachment 686930 [details]
File: limits

Comment 7 Franck C. 2013-01-24 19:38:41 UTC
Created attachment 686931 [details]
File: maps

Comment 8 Franck C. 2013-01-24 19:38:43 UTC
Created attachment 686932 [details]
File: open_fds

Comment 9 Franck C. 2013-01-24 19:38:44 UTC
Created attachment 686933 [details]
File: proc_pid_status

Comment 10 Franck C. 2013-01-24 19:38:46 UTC
Created attachment 686934 [details]
File: smolt_data

Comment 11 Franck C. 2013-01-24 19:38:48 UTC
Created attachment 686935 [details]
File: var_log_messages

Comment 12 Jan Friesse 2013-01-25 08:43:19 UTC
This looks more like a libqb problem (cpg.c:1815 is just log_printf(LOGSYS_LEVEL_TRACE, "got mcast request on %p", conn);), possibly already fixed upstream, so reassigning to libqb.

Comment 13 Franck C. 2013-01-25 13:52:45 UTC
Hi,

After some tests to understand what is happening, it appears
that dmraid does not start the Intel host RAID (emb-2 chipset). I noticed this on kernels since 3.6.7; with 3.6.6, dmraid works well. So I think it is a kernel dmraid module problem, among other problems like the ones you mentioned.
For now, is it possible to downgrade dmraid with the new kernels?

Comment 14 Franck C. 2013-02-16 01:47:32 UTC
Finally, I am not sure about what I said above.
Something definitely changed from 3.6.6 to 3.6.7:
fake RAID is broken by some F18 component, since fake RAID
works well only on the first reboot after a minimal installation.

Comment 15 Fedora End Of Life 2013-12-21 10:47:31 UTC
This message is a reminder that Fedora 18 is nearing its end of life.
Approximately 4 (four) weeks from now Fedora will stop maintaining
and issuing updates for Fedora 18. It is Fedora's policy to close all
bug reports from releases that are no longer maintained. At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '18'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 18's end of life.

Thank you for reporting this issue and we are sorry that we may not be 
able to fix it before Fedora 18 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version prior to Fedora 18's end of life.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

Comment 17 Fedora End Of Life 2014-02-05 18:26:49 UTC
Fedora 18 changed to end-of-life (EOL) status on 2014-01-14. Fedora 18 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.