Description of problem:
A corosync crash was detected during a cluster-wide shutdown. The redundant ring feature was in use.

Version-Release number of selected component (if applicable):
corosync-1.4.1-1.el6.x86_64

How reproducible:
Quite rarely.

Steps to Reproduce:
1. Configure a redundant ring cluster.
2. Issue `service cman stop` cluster-wide (via cssh, for example; a sketch follows at the end of this comment).

Actual results:
All nodes shut down; one crashed.

Expected results:
No crash.

Additional info:
Core was generated by `corosync -f'.
Program terminated with signal 6, Aborted.
#0  0x0000003acc232a45 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install cman-3.0.12.1-14.el6.x86_64 glibc-2.12-1.25.el6.x86_64 libgcc-4.4.5-6.el6.x86_64 libibverbs-1.1.4-2.el6.x86_64 librdmacm-1.0.10-2.el6.x86_64 libxml2-2.7.6-1.el6.x86_64 nspr-4.8.7-1.el6.x86_64 nss-3.12.9-9.el6.x86_64 nss-util-3.12.9-1.el6.x86_64 openais-1.1.1-7.el6.x86_64 zlib-1.2.3-25.el6.x86_64

(gdb) bt full
#0  0x0000003acc232a45 in raise () from /lib64/libc.so.6
No symbol table info available.
#1  0x0000003acc234225 in abort () from /lib64/libc.so.6
No symbol table info available.
#2  0x0000003acc22b9d5 in __assert_fail () from /lib64/libc.so.6
No symbol table info available.
#3  0x0000003155213aa1 in orf_token_rtr (instance=0x7f71169e7010, orf_token=0x7ffff0a0b5b0, fcc_allowed=0x7ffff0a0afcc) at totemsrp.c:2528

The crash happened after redundant ring testing, once all nodes had rejoined and normal traffic was restored (both rings functional). Running cpgbench traffic might help reproduce the issue; for me the hit rate was about 2 in 10.
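A rough sketch of the setup and the shutdown step, assuming three nodes named node1..node3 (placeholder names) and stock RHEL6 tooling; adjust names and paths to the actual cluster:

  # Redundant ring is enabled per node in /etc/cluster/cluster.conf by
  # naming the second interface with <altname>, roughly:
  #
  #   <clusternode name="node1" nodeid="1">
  #     <altname name="node1-alt"/>
  #   </clusternode>
  #
  # Optional: generate CPG load on each node first; cpgbench ships in
  # corosync's test directory and seemed to raise the hit rate here.
  ./cpgbench &

  # Broadcast the stop so it lands on all nodes at roughly the same
  # time; cssh types the same command into every window.
  cssh node1 node2 node3
  service cman stop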
Created attachment 519659 [details]
abrt-produced crash dump
Created attachment 519660 [details]
corosync-blackbox output after the crash
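For reference, output like this can be regenerated after a crash, assuming the default flight-recorder path: when corosync dies on an assert it leaves its flight data in /var/lib/corosync/fdata, which corosync-fplay decodes:

  # decode the flight recorder left behind by the crashed daemon
  corosync-fplay > blackbox.txt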
Jaroslav,

Is this a redundant-ring-only bug? (Does this happen on single-ring setups in your lab environment?)

Thanks
-steve
*** This bug has been marked as a duplicate of bug 732698 ***