Bug 1353629 - DS shuts down automatically if dnaThreshold is set to 0 in a MMR setup
Summary: DS shuts down automatically if dnaThreshold is set to 0 in a MMR setup
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: wibrown@redhat.com
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-07 15:31 UTC by Punit Kundal
Modified: 2020-09-13 21:47 UTC
CC: 4 users

Fixed In Version: 389-ds-base-1.3.5.10-4.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-03 20:43:51 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 1975 0 None closed DS shuts down automatically if dnaThreshold is set to 0 in a MMR setup 2020-12-24 16:14:12 UTC
Red Hat Product Errata RHSA-2016:2594 0 normal SHIPPED_LIVE Moderate: 389-ds-base security, bug fix, and enhancement update 2016-11-03 12:11:08 UTC

Description Punit Kundal 2016-07-07 15:31:46 UTC
Description of problem:
DS shuts down automatically if dnaThreshold is set to 0 in a MMR setup

Version-Release number of selected component (if applicable):
[root@ds ~]# rpm -qa | grep 389
389-ds-base-libs-1.3.5.10-1.el7.x86_64
389-ds-base-1.3.5.10-1.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
It's an MMR setup with 2 masters
 
1. Enabled the dna plugin on both instances
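
For reference, enabling the plugin normally amounts to switching nsslapd-pluginEnabled to "on" on the plugin entry and restarting the instance; the bind password, port, and instance name below are placeholders reused from elsewhere in this report, not the exact commands that were run:

[root@ds ~]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 3389 <<EOF
dn: cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
EOF
[root@ds ~]# restart-dirsrv mast1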
 
2. Added the required container entries for the dna plugin on both masters
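
The container entries themselves are not shown in this report; a minimal sketch of what they typically look like, assuming extensibleObject as the objectClass of the shared configuration container, is:

dn: ou=Ranges,dc=example,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Ranges

dn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
objectClass: top
objectClass: extensibleObject
cn: Account UIDs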
 
3. Now on master1, which will transfer the next range to master2, I added the dna plugin configuration entry like this:
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnaNextValue: 1
dnaMaxValue: 50
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 0
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123
dnaNextRange: 80-90
 
As can be seen in the above entry, I've set dnaThreshold to '0' and the dnaNextRange to 80-90
 
4. Then on master2, I added the dna plugin configuration entry like this:
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnanextvalue: 61
dnaMaxValue: 70
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 2
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123
 
master2 only has 10 numbers which it can allocate automatically for uidNumber and gidNumber attributes
 
5. Then I added the required replication configuration entries on both masters to configure replication (a rough sketch of these entries follows below)
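
The replication entries are not included in this report either. As a rough sketch only (the changelog entry and the replication manager account are omitted, and every DN, host, and port below is an assumption), each master gets a replica entry for the suffix plus an agreement pointing at the other master:

dn: cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
objectClass: top
objectClass: nsDS5Replica
cn: replica
nsDS5ReplicaRoot: dc=example,dc=com
nsDS5ReplicaId: 1
nsDS5ReplicaType: 3
nsDS5Flags: 1
nsDS5ReplicaBindDN: cn=replication manager,cn=config

dn: cn=agmt-to-master2,cn=replica,cn="dc=example,dc=com",cn=mapping tree,cn=config
objectClass: top
objectClass: nsDS5ReplicationAgreement
cn: agmt-to-master2
nsDS5ReplicaRoot: dc=example,dc=com
nsDS5ReplicaHost: ds.example.com
nsDS5ReplicaPort: 1389
nsDS5ReplicaBindDN: cn=replication manager,cn=config
nsDS5ReplicaBindMethod: SIMPLE
nsDS5ReplicaTransportInfo: LDAP
nsDS5ReplicaCredentials: secret123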
 
6. Added 10 entries on master2 to exhaust its available range
 
7. Did an ldapsearch on master2; all 10 entries were added and the uidNumber and gidNumber attributes were set accordingly by the dna plugin, so far so good
 
8. Now I tried adding an 11th entry on master2; it failed with this error:
ldap_add: Operations error (1)
additional info: Allocation of a new value for range cn=account uids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed.
 
9. Checked the error logs on master2; this is what they show:
[01/Jul/2016:18:11:00.370146064 +051800] slapi_ldap_bind - Error: could not send bind request for id [uid=dnaAdmin,ou=People,dc=example,dc=com] authentication mechanism [SIMPLE]: error -1 (Can't contact LDAP server), system error -5987 (Invalid function argument.), network error 107 (Transport endpoint is not connected, host "ds.example.com:3389")
[01/Jul/2016:18:12:49.411378922 +051800] dna-plugin - dna_request_range: Error sending range extension extended operation request to server ds.example.com:3389 [error -1]
[01/Jul/2016:18:12:49.536799341 +051800] dna-plugin - dna_pre_op: no more values available!!
[01/Jul/2016:18:14:12.466096468 +051800] slapi_ldap_bind - Error: could not send bind request for id [uid=dnaAdmin,ou=People,dc=example,dc=com] authentication mechanism [SIMPLE]: error -1 (Can't contact LDAP server), system error -5987 (Invalid function argument.), network error 107 (Transport endpoint is not connected, host "ds.example.com:3389")
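
At this point the bind target used for the range-extension request can be probed directly with the DNA bind identity from the configuration above; with master1 already down, a command like this would be expected to fail to connect:

[root@ds ~]# ldapsearch -xLLL -D 'uid=dnaAdmin,ou=People,dc=example,dc=com' -w secret123 -h ds.example.com -p 3389 -b '' -s base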
 
10. Upon investigating with status-dirsrv, I found that master1 had been killed:
[root@ds ~]# status-dirsrv mast1
● dirsrv - 389 Directory Server mast1.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled; vendor preset: disabled)
   Active: failed (Result: signal) since Thu 2016-07-07 20:35:23 IST; 9min ago
 Main PID: 5852 (code=killed, signal=FPE)
   Status: "slapd started: Ready to process requests"
 
The Status line here, however, still shows "Ready to process requests"
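
The journal for the instance unit and the instance error log are where the kill actually shows up; something along these lines (unit name taken from the status output above, log path assuming the default location) should report the same signal=FPE result:

[root@ds ~]# journalctl -u dirsrv@mast1.service --since "1 hour ago"
[root@ds ~]# tail /var/log/dirsrv/slapd-mast1/errors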
 
11. Restarted master1 with start-dirsrv master1
 
12. Checked the error logs on master1; this is what they show:
[01/Jul/2016:18:18:04.629383441 +051800] Detected Disorderly Shutdown last time Directory Server was running, recovering database.
[01/Jul/2016:18:18:04.816326545 +051800] NSMMReplicationPlugin - changelog program - _cl5NewDBFile: PR_DeleteSemaphore: /var/lib/dirsrv/slapd-mast1/changelogdb/62f98283-3f8811e6-8e9ac15d-c133e87f.sema; NSPR error - -5943
[01/Jul/2016:18:18:04.974329732 +051800] NSMMReplicationPlugin - replica_check_for_data_reload: Warning: disordely shutdown for replica dc=example,dc=com. Check if DB RUV needs to be updated
[01/Jul/2016:18:18:05.011287516 +051800] slapd started.  Listening on All Interfaces port 3389 for LDAP requests
 
13. Did an ldapsearch on master1 and found that only some of the 10 entries added on master2 had been replicated to master1; the others were missing.
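
A quick way to see this divergence is to list the posixAccount entries on both masters and compare; master1's host and port are taken from the log messages above, master2's are placeholders:

[root@ds ~]# ldapsearch -xLLL -h ds.example.com -p 3389 -b 'ou=People,dc=example,dc=com' '(objectClass=posixAccount)' dn uidNumber gidNumber
[root@ds ~]# ldapsearch -xLLL -h <master2-host> -p <master2-port> -b 'ou=People,dc=example,dc=com' '(objectClass=posixAccount)' dn uidNumber gidNumber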

Comment 3 Noriko Hosoi 2016-07-08 19:08:16 UTC
Upstream ticket:
https://fedorahosted.org/389/ticket/48916

Comment 5 Punit Kundal 2016-07-18 12:31:41 UTC
RHEL:
RHEL 7.3 x86_64 Server

DS builds:
[root@org47 ~]# rpm -qa | grep 389-ds-base
389-ds-base-1.3.5.10-4.el7.x86_64
389-ds-base-snmp-1.3.5.10-4.el7.x86_64
389-ds-base-libs-1.3.5.10-4.el7.x86_64

Steps Performed:
1. Created two standalone instances as master1 and master2
 
2. Configured DNA plugin on both instances,
 
Below is the DNA plugin configuration entry on master1
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnaNextValue: 1
dnaMaxValue: 20
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 0
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123
dnaNextRange: 41-50
 
dnaThreshold is set to 0 on master1
 
Below is the DNA plugin configuration entry on master2
 
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: dnaPluginConfig
cn: Account UIDs
dnatype: uidNumber
dnatype: gidNumber
dnafilter: (objectclass=posixAccount)
dnascope: ou=People,dc=example,dc=com
dnanextvalue: 21
dnaMaxValue: 30
dnasharedcfgdn: cn=Account UIDs,ou=Ranges,dc=example,dc=com
dnaThreshold: 2
dnaRangeRequestTimeout: 60
dnaMagicRegen: magic
dnaRemoteBindDN: uid=dnaAdmin,ou=People,dc=example,dc=com
dnaRemoteBindCred: secret123
 
master2 only has 10 numbers available for allocation
 
3. Configured a 2x MMR setup by adding the required replication configuration entries on both masters
 
4. Added 10 users on master2 to exhaust its available range
[root@org47 dna_setup]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -a -f users.ldif
adding new entry "uid=testuser21,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser22,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser23,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser24,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser25,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser26,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser27,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser28,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser29,ou=People,dc=example,dc=com"
 
adding new entry "uid=testuser30,ou=People,dc=example,dc=com"
 
5. Checked the status of master1
[root@org47 ~]# status-dirsrv master1
● dirsrv - 389 Directory Server master1.
   Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2016-09-18 17:31:58 IST; 9min ago
 Main PID: 19494 (ns-slapd)
   Status: "slapd started: Ready to process requests"
 
master1 is still running, no crash here
 
6. Tried adding another entry on master2 as below
[root@org47 dna_setup]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -a -f user.ldif
adding new entry "uid=testuser32,ou=People,dc=example,dc=com"
ldap_add: Operations error (1)
        additional info: Allocation of a new value for range cn=account uids,cn=distributed numeric assignment plugin,cn=plugins,cn=config failed! Unable to proceed.
     
7. Modified the value of dnaThreshold on master1
[root@org47 dna_setup]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 389
dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
changetype: modify
replace: dnaThreshold
dnaThreshold: 5
modifying entry "cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config"
 
9. Once again tried to add an entry on master2
[root@org47 dna_setup]# ldapmodify -x -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -a -f user.ldif
adding new entry "uid=testuser31,ou=People,dc=example,dc=com"
 
The entry was added successfully.
 
10. Verified that range was transferred properly
[root@org47 dna_setup]# ldapsearch -xLLL -b 'uid=testuser31,ou=People,dc=example,dc=com' -h localhost -p 1389
dn: uid=testuser31,ou=People,dc=example,dc=com
cn: test user
homeDirectory: /home/testuser
objectClass: top
objectClass: person
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: organizationalPerson
sn: user
uid: testuser
uid: testuser31
uidNumber: 46
gidNumber: 46
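
The value 46 falls inside the 41-50 range that master1 was holding in dnaNextRange, which is what confirms the transfer. Reading back master2's plugin configuration entry should show the received range as well; the attribute list requested here is only a suggestion:

[root@org47 dna_setup]# ldapsearch -xLLL -D 'cn=Directory Manager' -w secret123 -h localhost -p 1389 -b 'cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config' dnaNextValue dnaMaxValue dnaNextRange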

Comment 7 errata-xmlrpc 2016-11-03 20:43:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2594.html

