When running applications that use the Corosync IPC library, some messages in the dispatch() function were lost or duplicated. This update properly checks the return values of the dispatch_put() function, returns the correct number of remaining bytes in the IPC ring buffer, and ensures that the IPC client is correctly informed about the real number of messages in the ring buffer. Now, messages in the dispatch() function are no longer lost or duplicated.
This is a clone of Bug #907894, addressing local IPC problems.
Created attachment 711845 [details]
Proposed patch - part 1
Created attachment 711846 [details]
Proposed patch - part 2 - check dispatch_put return code
The problem addressed by proposed patches 1 + 2 is reproducible by running https://github.com/jfriesse/csts/blob/master/tests/start-cfgstop-one-by-one-with-load.sh. When the bug appears, there are duplicated messages in the output (usually the last 2 are duplicates).
Created attachment 711848 [details]
Proposed patch - part 3 - Take alignment into account for free_bytes in ring buffer
"Unit test" https://github.com/jfriesse/csts/blob/master/tests/ipc-overflow.sh
Created attachment 711850 [details]
Proposed patch - part 4 - Properly lock pending_semops
Sadly, this problem is a race, so it's quite hard to reproduce. I had moderate success with two nodes and:
- node 1 - running corosync, cpgload -q -n 500 and cpgload -l 1 -n 500 -q
- node 2 - running corosync and cpgload -q -n 500
After 5+ hours, one of the cpgload instances terminates (it exits with return code 0, because CS_ERR_LIBRARY arrived).
With the patch, I was able to run the configuration above for 3 days.
Keep in mind that it CAN happen (because of the extremely high load) that cpgload pauses and corosync is terminated by the OOM killer. This is not a bug.
Barry: Can you please give the scratch build http://brewweb.devel.redhat.com/brew/taskinfo?taskID=5527619 a try? I was able to run the above test for 3 days, and I really hope it solves the problem you are hitting (please use the most unstable configuration, i.e. irqbalance running, corosync unpinned, errata kernel, ...).
Created attachment 711875 [details]
Proposed patch - part 4 - Properly lock pending_semops - Try2
Created attachment 711877 [details]
Proposed patch - part 4 - Properly lock pending_semops - Try3
I have run SAS calibration on my 4 node cluster with the latest bits.
5 different types of test configurations were run to try to stress the cluster interconnect in different ways.
Each configuration was run 10 times (each run stresses the system for at least 2-3 hours).
After 6 days of testing, I can report ZERO failures.
Verified using ipc-overflow.sh test:
FAIL on corosync-1.4.1-15.el6.x86_64 (RHEL6.4)
PASS on corosync-1.4.1-17.el6.x86_64 (RHEL6.5)
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.