This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/49067
In an environment where memberOf is enabled on one master and disabled on another master, the following happens.
1) Several pieces of data, including invalid data, are added to the master with memberOf disabled.
2) GOOD, BAD, GOOD data is sent via replication to the master with memberOf enabled.
3) The master with memberOf enabled rejects the BAD data, and the master sending the replicated data notes that it skipped the BAD change. Replication is still reported as good.
So:
A) The consumer isn't severing the connection.
B) The supplier should treat a schema-validation error during replication as a hard error and not skip the change.
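Today, the only supplier-side evidence of the skip is the "Skipping" message in the errors log (see the excerpt further below). As a stopgap, skipped changes can at least be surfaced by scanning that log. The following is a minimal sketch, not shipped tooling: the message format is taken from the excerpt in this ticket and may vary between versions, and the default log path is an assumption.

# Sketch: surface replicated changes a 389-ds supplier silently skipped.
# Message format taken from this ticket's errors-log excerpt; the default
# log path below is an assumption and varies per instance.
import re
import sys

# Matches e.g.:
# ... Consumer failed to replay change (uniqueid 5118e701-..., CSN
# 584ad162000000010000): Object class violation (65). Skipping.
SKIP_RE = re.compile(
    r'Consumer failed to replay change '
    r'\(uniqueid (?P<uniqueid>[0-9a-f-]+), CSN (?P<csn>\w+)\): '
    r'(?P<reason>.+?)\. Skipping\.'
)

def skipped_changes(errors_log_path):
    """Yield (csn, uniqueid, reason) for every skipped replicated change."""
    with open(errors_log_path) as f:
        for line in f:
            m = SKIP_RE.search(line)
            if m:
                yield m.group("csn"), m.group("uniqueid"), m.group("reason")

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/dirsrv/slapd-master1/errors"
    for csn, uniqueid, reason in skipped_changes(path):
        print(f"skipped CSN {csn} (uniqueid {uniqueid}): {reason}")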
More info and a reproduction case can be found in the email "Re: CASE 01751197". Ludwig was able to reproduce it and left these notes:
no, I have on the consumer side:
[09/Dec/2016:16:44:34.222683287 +0100] conn=4 op=0 BIND dn="cn=replrepl,cn=config" method=128 version=3
[09/Dec/2016:16:44:34.222905979 +0100] conn=4 op=0 RESULT err=0 tag=97 nentries=0 etime=0 dn="cn=replrepl,cn=config"
[09/Dec/2016:16:44:34.228996479 +0100] conn=4 op=1 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[09/Dec/2016:16:44:34.229495450 +0100] conn=4 op=1 RESULT err=0 tag=101 nentries=1 etime=0
[09/Dec/2016:16:44:34.229661093 +0100] conn=4 op=2 SRCH base="" scope=0 filter="(objectClass=*)" attrs="supportedControl supportedExtension"
[09/Dec/2016:16:44:34.230064532 +0100] conn=4 op=2 RESULT err=0 tag=101 nentries=1 etime=0
[09/Dec/2016:16:44:34.230289566 +0100] conn=4 op=3 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[09/Dec/2016:16:44:34.279233418 +0100] conn=4 op=3 RESULT err=0 tag=120 nentries=0 etime=0
[09/Dec/2016:16:44:34.437387789 +0100] conn=4 op=4 ADD dn="cn=g1,dc=example,dc=com"
[09/Dec/2016:16:44:34.555185922 +0100] conn=4 op=4 RESULT err=65 tag=105 nentries=0 etime=0 csn=584ad162000000010000
[09/Dec/2016:16:44:34.654083075 +0100] conn=4 op=5 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[09/Dec/2016:16:44:34.674259915 +0100] conn=4 op=5 RESULT err=0 tag=120 nentries=0 etime=0
[09/Dec/2016:16:44:48.012642473 +0100] conn=4 op=6 EXT oid="2.16.840.1.113730.3.5.12" name="replication-multimaster-extop"
[09/Dec/2016:16:44:48.064483314 +0100] conn=4 op=6 RESULT err=0 tag=120 nentries=0 etime=0
[09/Dec/2016:16:44:48.229068063 +0100] conn=4 op=7 ADD dn="cn=g1,dc=example,dc=com"
[09/Dec/2016:16:44:48.360800339 +0100] conn=4 op=7 RESULT err=65 tag=105 nentries=0 etime=0 csn=584ad162000000010000
[09/Dec/2016:16:44:48.372697051 +0100] conn=4 op=8 ADD dn="cn=yyy,dc=example,dc=com"
[09/Dec/2016:16:44:48.513254149 +0100] conn=4 op=8 RESULT err=0 tag=105 nentries=0 etime=0 csn=584ad170000000010000
[09/Dec/2016:16:44:48.651607165 +0100] conn=4 op=9 EXT oid="2.16.840.1.113730.3.5.5" name="replication-multimaster-extop"
[09/Dec/2016:16:44:48.669904404 +0100] conn=4 op=9 RESULT err=0 tag=120 nentries=0 etime=0
[09/Dec/2016:16:45:48.735018262 +0100] conn=4 op=11 UNBIND
[09/Dec/2016:16:45:48.735053744 +0100] conn=4 op=11 fd=64 closed - U1
and on the supplier:
[09/Dec/2016:16:44:48.468835078 +0100] - DEBUG - NSMMReplicationPlugin - repl5_inc_update_from_op_result - agmt="cn=meTo_localhost.localdomain:38942" (localhost:38942): Consumer failed to replay change (uniqueid 5118e701-be2611e6-88d1a1a4-9fa8583b, CSN 584ad162000000010000): Object class violation (65). Skipping.
The test case is:
have two masters in sync
add an entry E whose object classes do not allow the memberOf attribute
enable memberOf on master2
add a group G containing E as a member on master1; it is accepted and replicated
on master2 the replicated add fails with err=65 raised by the memberOf plugin
do another change on master1; it is replicated, and G is now missing on master2 (a scripted version of these steps follows)
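For illustration, here is a hedged python-ldap sketch of those steps. Host names, ports, and the Directory Manager password are placeholders, the sleeps stand in for proper replication-convergence checks, and enabling the plugin typically also requires a restart of master2, which the script cannot do remotely.

# Hedged reproduction sketch with python-ldap; placeholders throughout.
import time
import ldap
import ldap.modlist

SUFFIX = "dc=example,dc=com"

def connect(url):
    conn = ldap.initialize(url)
    conn.simple_bind_s("cn=Directory Manager", "password")
    return conn

m1 = connect("ldap://master1.example.com:389")  # memberOf stays disabled here
m2 = connect("ldap://master2.example.com:389")

# Step 2: add an entry E whose object classes do not allow memberOf
# (a bare "device" entry) and let it replicate to master2.
e_dn = f"cn=e1,{SUFFIX}"
m1.add_s(e_dn, ldap.modlist.addModlist(
    {"objectClass": [b"top", b"device"], "cn": [b"e1"]}))
time.sleep(5)

# Step 3: enable the memberOf plugin on master2 only. A restart of
# master2 is typically needed for the change to take effect.
m2.modify_s("cn=MemberOf Plugin,cn=plugins,cn=config",
            [(ldap.MOD_REPLACE, "nsslapd-pluginEnabled", [b"on"])])

# Step 4: the group is accepted on master1, where no memberOf fixup runs.
g_dn = f"cn=g1,{SUFFIX}"
m1.add_s(g_dn, ldap.modlist.addModlist(
    {"objectClass": [b"top", b"groupOfNames"],
     "cn": [b"g1"],
     "member": [e_dn.encode()]}))
time.sleep(5)

# Step 5: master2's memberOf plugin rejects the replicated ADD with
# err=65 and the supplier logs the change as skipped.

# Step 6: an unrelated change still replicates, so replication looks
# healthy while G is silently missing on master2.
m1.modify_s(e_dn, [(ldap.MOD_REPLACE, "description", [b"poke replication"])])
time.sleep(5)
try:
    m2.search_s(g_dn, ldap.SCOPE_BASE)
    print("g1 present on master2 - not reproduced")
except ldap.NO_SUCH_OBJECT:
    print("g1 missing on master2 - masters are inconsistent")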
About the severity: it is a consequence of a strange memberOf configuration, but on the other hand it leads to inconsistent data that is difficult to find and repair.