Summary: When flushing, discard only memb_join messages
Product: Red Hat Enterprise Linux 6
Reporter: Jan Friesse <jfriesse>
Component: corosync
Assignee: Jan Friesse <jfriesse>
Status: CLOSED ERRATA
QA Contact: Cluster QE <mspqa-list>
Version: 6.3
CC: jkortus, sbradley, sdake, syeghiay
Fixed In Version: corosync-1.4.1-11.el6
Doc Type: Bug Fix
Doc Text:
Cause: Use of RRP in some networks.
Consequence: On some networks, or with some CPU timing, a ring is very often marked faulty when it is not. The main problem is that, while flushing, Corosync drops not only memb_join packets but also ORF tokens.
Fix: Drop only memb_join messages.
Result: The RRP interface should no longer be improperly marked as faulty.
Last Closed: 2013-02-21 07:50:46 UTC
Type: Bug
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Description Jan Friesse 2012-08-22 11:05:20 UTC
Created attachment 606196 [details]
Proposed patch

Description of problem:
When flushing, discard only memb_join messages.

The patch solves a problem where one ring out of two went up and down quite often. The simplest setup to reproduce the bug is the following:
- 2 VMs, connected by 2 network interfaces
- OS: Linux
- On one of the VMs, a test program sending some CPG messages (see the script "test_corosync.sh" attached to this mail for an example; a sketch of such a workload follows at the end of this comment)

Here are the Corosync logs we get with this setup:

Jun 06 16:23:40 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 06 16:23:40 corosync [CPG ] chosen downlist: sender r(0) ip(192.168.56.104) r(1) ip(192.168.57.104) ; members(old:1 left:0)
Jun 06 16:23:40 corosync [MAIN ] Completed service synchronization, ready to provide service.
Jun 06 16:24:37 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:24:38 corosync [TOTEM ] Automatically recovered ring 1
Jun 06 16:25:33 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:25:34 corosync [TOTEM ] Automatically recovered ring 1
Jun 06 16:26:35 corosync [TOTEM ] Marking ringid 1 interface 192.168.57.105 FAULTY
Jun 06 16:26:36 corosync [TOTEM ] Automatically recovered ring 1
(...)

The second ring goes down about every 2 minutes and automatically comes back up right after.

We spent some time looking for the commit that introduced this bug, and it appears to be the following one:

Corosync 1.3.3 -> 1.3.4: e27a58d93d0d3795beb550f87b660c9c04f11386
Corosync 1.4.1 -> 1.4.2: be608c050247e5f9c8266b8a0f9803cc0a3dc881

Commit message: Ignore memb_join messages during flush operations

I had a look at this commit, and it seems to me it drops too many packets: because of it, while totemrrp_recv_flush() is running, Corosync drops not only memb_join packets but also ORF tokens. In the end, it seems that sometimes so many of them are dropped that Corosync marks the ring as faulty. With this fix, only memb_join messages are dropped.

How reproducible:
0.1%

Steps to Reproduce:
Described in the description above.

Actual results:
The RRP interface is marked faulty too often.

Expected results:
Opposite of actual results.

Additional info:
For QE: Community patch; I was not able to reproduce it myself.
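For illustration only: the attached test_corosync.sh is not reproduced here, but a minimal CPG workload along the same lines could look like the sketch below. This is a hypothetical stand-in, not the attached script; the group name, message rate, and message count are assumptions.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <corosync/cpg.h>

/* Hypothetical stand-in for the attached test workload: join a CPG
 * group and multicast small messages in a loop.
 * Build (assumed): gcc -o cpg_flood cpg_flood.c -lcpg
 */
int main(void)
{
    cpg_handle_t handle;
    struct cpg_name group;
    struct iovec iov;
    cpg_callbacks_t callbacks;
    char payload[] = "rrp-flush-test";
    int i;

    /* No delivery/confchg callbacks needed for a pure sender. */
    memset(&callbacks, 0, sizeof(callbacks));
    if (cpg_initialize(&handle, &callbacks) != CS_OK) {
        fprintf(stderr, "cpg_initialize failed\n");
        return 1;
    }

    strcpy(group.value, "rrp_test");   /* assumed group name */
    group.length = strlen(group.value);
    if (cpg_join(handle, &group) != CS_OK) {
        fprintf(stderr, "cpg_join failed\n");
        return 1;
    }

    iov.iov_base = payload;
    iov.iov_len = sizeof(payload);
    for (i = 0; i < 100000; i++) {
        cpg_mcast_joined(handle, CPG_TYPE_AGREED, &iov, 1);
        usleep(10000);                 /* ~100 messages/s */
    }

    cpg_leave(handle, &group);
    cpg_finalize(handle);
    return 0;
}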
Comment 2 Jan Friesse 2012-08-23 08:17:05 UTC
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause: Use of RRP in some networks.
Consequence: On some networks, or with some CPU timing, a ring is very often marked faulty when it is not. The main problem is that, while flushing, Corosync drops not only memb_join packets but also ORF tokens.
Fix: Drop only memb_join messages.
Result: The RRP interface should no longer be improperly marked as faulty.
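To make the "drop only memb_join" fix concrete, here is a minimal sketch in C of the receive-path check. This is illustrative only: the structure names, the flushing flag, deliver_to_srp(), and the MESSAGE_TYPE_MEMB_JOIN value are simplified stand-ins in the style of totemrrp.c/totemsrp.c, not the actual patch.

#include <stddef.h>

#define MESSAGE_TYPE_MEMB_JOIN 3   /* assumed value */

/* Simplified stand-ins for the real corosync structures. */
struct message_header {
    char type;
    char encapsulated;
    unsigned short endian_detector;
};

struct rrp_instance {
    int flushing;   /* nonzero while totemrrp_recv_flush() runs */
};

/* Stub for the normal delivery path (hypothetical). */
static void deliver_to_srp(struct rrp_instance *instance,
                           const void *msg, size_t msg_len)
{
    (void)instance; (void)msg; (void)msg_len;
}

static void rrp_deliver_fn(struct rrp_instance *instance,
                           const void *msg, size_t msg_len)
{
    const struct message_header *hdr = msg;

    if (instance->flushing) {
        /*
         * Buggy behaviour: return unconditionally here, which also
         * dropped ORF tokens and eventually marked the ring faulty.
         * Fixed behaviour: discard only memb_join messages.
         */
        if (hdr->type == MESSAGE_TYPE_MEMB_JOIN) {
            return;
        }
    }

    /* Everything else, ORF tokens included, is delivered normally. */
    deliver_to_srp(instance, msg, msg_len);
}

The point of the narrower check is that token traffic keeps flowing during a flush, so the token-loss timers never fire and the ring is not declared faulty.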
Comment 6 errata-xmlrpc 2013-02-21 07:50:46 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0497.html