Bug 772150 - ipa-replica-manage re-initialize causes ALL servers to rerun memberof fixup
Summary: ipa-replica-manage re-initialize causes ALL servers to rerun memberof fixup
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ipa
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Rob Crittenden
QA Contact: IDM QE LIST
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-06 07:20 UTC by Martin Kosek
Modified: 2012-06-20 13:28 UTC
CC List: 3 users

Fixed In Version: ipa-2.2.0-1.el6
Doc Type: Bug Fix
Doc Text:
Cause: Some new IPA replica agreements may lack the list of attributes that should be excluded from replication. IPA attributes that are generated locally on each master by an LDAP server plugin (the memberOf attribute in this case) are then replicated, which may force the LDAP servers on all IPA replicas to re-process memberOf data and thus increases their load. Consequence: When many entries are added on a replica in a short time, or when a replica is re-initialized from another master, all replicas are flooded with memberOf changes, which may cause high load on all replica machines and lead to performance issues. Fix: New replica agreements added by ipa-replica-install now include the list of attributes excluded from replication. Result: Re-initialization, or a high number of added entries in the IPA LDAP server, should no longer cause performance issues from memberOf processing. Old replica agreements are also updated to contain the correct list of attributes excluded from replication.
Clone Of:
Environment:
Last Closed: 2012-06-20 13:28:48 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID:    Red Hat Product Errata RHBA-2012:0819
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      ipa bug fix and enhancement update
Last Updated: 2012-06-19 20:34:17 UTC

Description Martin Kosek 2012-01-06 07:20:10 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/freeipa/ticket/2213

I have a multimaster infrastructure with 3 core FreeIPA servers and 10 supporting (procedurally read-only) FreeIPA servers.

I notice that occasionally one of the systems starts producing errors that fill up /var/log/dirsrv/slapd-DOMAIN-COM/errors:
Replica has a different generation ID than the local data
(I suspect this is due to ntp problems that I am trying to work out)
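
A quick way to confirm this condition is to search the 389-DS errors log for that message (a minimal sketch; slapd-DOMAIN-COM is the reporter's placeholder instance name):

# grep "generation ID" /var/log/dirsrv/slapd-DOMAIN-COM/errors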

http://www.centos.org/docs/5/html/CDS/ag/8.0/Managing_Replication-Troubleshooting_Replication_Related_Problems.html

^ This document suggests that I should re-initialize the problematic system from one of the core master servers.

Upon doing so, I am finding that the CPUs on all 13 servers spike to 100% of one core while they re-process memberOf data. Even though these systems have many cores, the intense, single-threaded nature of this process causes a performance hit in all 13 data centers for all clients.

Am I reading the documentation wrong? Shouldn't a re-initialization of the problematic host only cause replication from the master to that slave, plus a memberOf fixup on that slave?

This seems like a fairly severe performance-affecting bug.

How to reproduce:

Set up a 3-participant FreeIPA replica topology:
1 master -> 2 slaves

Perform an ipa-replica-manage re-initialize --from=master on one of the slaves.

Notice that the other slave also performs a memberOf fixup.

NOTE: This problem compounds with scale: the more hosts, users, groups, hostgroups, HBAC rules, and sudo rules you have, the longer the fixup runs and the more noticeable the performance impact is.
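
The root cause can be inspected directly: in 389-DS, each replication agreement carries an nsDS5ReplicatedAttributeList attribute that should exclude memberOf from replication. A minimal sketch for checking it (assuming a default IPA/389-DS setup and Directory Manager credentials; a correctly configured agreement lists memberof after the EXCLUDE keyword):

# ldapsearch -x -D "cn=Directory Manager" -W -b "cn=mapping tree,cn=config" \
      "(objectclass=nsds5replicationagreement)" nsDS5ReplicatedAttributeList

Agreements where this attribute is missing, or where its value does not exclude memberof, will replicate memberOf changes and trigger the fixup storm described above.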

Comment 1 Martin Kosek 2012-01-16 11:33:15 UTC
Fixed upstream:

master: 0d3cd4c3840c1e67adc85f17debe0f6c5f04b309
ipa-2-2: d20a11aa90fbee923bbf7781399e67c623f8a3da

Comment 3 Gowrishankar Rajaiyan 2012-03-30 06:39:36 UTC
Test 1: for groups

MASTER
[root@primenova ~]# ipa group-show group1 --all --raw
  dn: cn=group1,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  cn: group1
  description: g1
  gidnumber: 981601872
  member: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  ipauniqueid: 1c5f03c6-79ae-11e1-89d1-52540063d50e
  memberindirect: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  objectclass: top
  objectclass: groupofnames
  objectclass: nestedgroup
  objectclass: ipausergroup
  objectclass: ipaobject
  objectclass: posixgroup
[root@primenova ~]# ipa group-show group2 --all --raw
  dn: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  cn: group2
  description: g2
  gidnumber: 981601873
  member: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  ipauniqueid: 22b9f118-79ae-11e1-a151-52540063d50e
  memberof: cn=group1,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  objectclass: top
  objectclass: groupofnames
  objectclass: nestedgroup
  objectclass: ipausergroup
  objectclass: ipaobject
  objectclass: posixgroup
[root@primenova ~]# ipa group-show group3 --all --raw
  dn: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  cn: group3
  description: g3
  gidnumber: 981601874
  ipauniqueid: 286b43dc-79ae-11e1-b8b6-52540063d50e
  memberof: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberofindirect: cn=group1,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  objectclass: top
  objectclass: groupofnames
  objectclass: nestedgroup
  objectclass: ipausergroup
  objectclass: ipaobject
  objectclass: posixgroup
[root@primenova ~]#


REPLICA1:
[root@rodimus ~]# ipa group-remove-member
Group name: group1
[member user]:
[member group]: group2
  Group name: group1
  Description: g1
  GID: 981601872
---------------------------
Number of members removed 1
---------------------------
[root@rodimus ~]#

REPLICA2: removal of "memberof: cn=group1,..." is replicated
[root@goldbug ~]# ipa group-show group2 --all --raw
  dn: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  cn: group2
  description: g2
  gidnumber: 981601873
  member: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  ipauniqueid: 22b9f118-79ae-11e1-a151-52540063d50e
  objectclass: top
  objectclass: groupofnames
  objectclass: nestedgroup
  objectclass: ipausergroup
  objectclass: ipaobject
  objectclass: posixgroup
[root@goldbug ~]#

Result: memberOf values are all consistent.

Comment 4 Gowrishankar Rajaiyan 2012-03-30 06:51:53 UTC
Test 2: for users

REPLICA1:

[root@rodimus ~]# ipa user-show shanks --all --raw | grep -v objectclass
  dn: uid=shanks,cn=users,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  uid: shanks
  givenname: g
  sn: r
  cn: g r
  displayname: g r
  initials: gr
  homedirectory: /home/shanks
  gecos: g r
  loginshell: /bin/sh
  krbprincipalname: shanks.PNQ.REDHAT.COM
  uidnumber: 981701000
  gidnumber: 981701000
  nsaccountlock: False
  has_password: False
  has_keytab: False
  ipauniqueid: 425671f6-7a29-11e1-a855-5254001857c6
  krbpwdpolicyreference: cn=global_policy,cn=LAB.ENG.PNQ.REDHAT.COM,cn=kerberos,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=ipausers,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group1,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  mepmanagedentry: cn=shanks,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
[root@rodimus ~]# 

REPLICA2:
[root@goldbug ~]# ipa group-remove-member 
Group name: group1
[member user]: shanks
[member group]: 
  Group name: group1
  Description: g1
  GID: 981601872
---------------------------
Number of members removed 1
---------------------------
[root@goldbug ~]# 

REPLICA1: "memberof: cn=group1,..." is removed
[root@rodimus ~]# ipa user-show shanks --all --raw | grep -v objectclass
  dn: uid=shanks,cn=users,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  uid: shanks
  givenname: g
  sn: r
  cn: g r
  displayname: g r
  initials: gr
  homedirectory: /home/shanks
  gecos: g r
  loginshell: /bin/sh
  krbprincipalname: shanks.PNQ.REDHAT.COM
  uidnumber: 981701000
  gidnumber: 981701000
  nsaccountlock: False
  has_password: False
  has_keytab: False
  ipauniqueid: 425671f6-7a29-11e1-a855-5254001857c6
  krbpwdpolicyreference: cn=global_policy,cn=LAB.ENG.PNQ.REDHAT.COM,cn=kerberos,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=ipausers,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  mepmanagedentry: cn=shanks,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
[root@rodimus ~]# 

MASTER: "memberof: cn=group1,..." is removed
[root@primenova ~]# ipa user-show shanks --all --raw | grep -v objectclass
  dn: uid=shanks,cn=users,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  uid: shanks
  givenname: g
  sn: r
  cn: g r
  displayname: g r
  initials: gr
  homedirectory: /home/shanks
  gecos: g r
  loginshell: /bin/sh
  krbprincipalname: shanks.PNQ.REDHAT.COM
  uidnumber: 981701000
  gidnumber: 981701000
  nsaccountlock: False
  has_password: False
  has_keytab: False
  ipauniqueid: 425671f6-7a29-11e1-a855-5254001857c6
  krbpwdpolicyreference: cn=global_policy,cn=LAB.ENG.PNQ.REDHAT.COM,cn=kerberos,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=ipausers,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group2,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  memberof: cn=group3,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
  mepmanagedentry: cn=shanks,cn=groups,cn=accounts,dc=lab,dc=eng,dc=pnq,dc=redhat,dc=com
[root@primenova ~]# 

Result: memberOf values are all consistent.

Comment 6 Gowrishankar Rajaiyan 2012-03-30 07:03:14 UTC
Verified: ipa-server-2.2.0-7.el6.x86_64

Comment 7 Martin Kosek 2012-04-19 20:13:34 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause: Some new IPA replica agreements may lack the list of attributes that should be excluded from replication. IPA attributes that are generated locally on each master by an LDAP server plugin (the memberOf attribute in this case) are then replicated. This may force the LDAP servers on all IPA replicas to re-process memberOf data and thus increase their load.
Consequence: When many entries are added on a replica in a short time, or when a replica is re-initialized from another master, all replicas are flooded with memberOf changes, which may cause high load on all replica machines and lead to performance issues.
Fix: New replica agreements added by ipa-replica-install now include the list of attributes excluded from replication.
Result: Re-initialization or a high number of added entries in the IPA LDAP server should not cause performance issues from memberOf processing.

Comment 8 Martin Kosek 2012-04-20 09:01:02 UTC
    Technical note updated. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    Diffed Contents:
@@ -1,4 +1,4 @@
 Cause: Some new IPA replica agreements may lack the list of attributes that should be excluded from replication. IPA attributes that are generated locally on each master by an LDAP server plugin (the memberOf attribute in this case) are then replicated. This may force the LDAP servers on all IPA replicas to re-process memberOf data and thus increase their load.
 Consequence: When many entries are added on a replica in a short time, or when a replica is re-initialized from another master, all replicas are flooded with memberOf changes, which may cause high load on all replica machines and lead to performance issues.
 Fix: New replica agreements added by ipa-replica-install now include the list of attributes excluded from replication.
-Result: Re-initialization or a high number of added entries in the IPA LDAP server should not cause performance issues from memberOf processing.
+Result: Re-initialization or a high number of added entries in the IPA LDAP server should not cause performance issues from memberOf processing. Old replica agreements are also updated to contain the correct list of attributes excluded from replication.
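
In 389-DS terms, the fix amounts to populating the nsDS5ReplicatedAttributeList attribute on each replication agreement. A minimal illustrative sketch of such a change, applied with ldapmodify (the agreement DN and the exact attribute set are hypothetical, not taken from this bug):

dn: cn=meToreplica1.example.com,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicatedAttributeList
nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE memberof entryusn

With memberOf excluded, locally generated values stay on the originating server and each replica recomputes them with its own memberOf plugin as the underlying member changes replicate, which prevents the fixup storm.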

Comment 10 errata-xmlrpc 2012-06-20 13:28:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0819.html

