Bug 1406101

Summary: Importing big ldif file with duplicate DNs throwing "unable to flush" error
Product: Red Hat Enterprise Linux 7
Reporter: Noriko Hosoi <nhosoi>
Component: 389-ds-base
Assignee: mreynolds
Status: CLOSED ERRATA
QA Contact: Viktor Ashirov <vashirov>
Version: 7.3
CC: amsharma, nkinder, rmeggins
Target Milestone: rc
Fixed In Version: 389-ds-base-1.3.6.1-3.el7
Doc Type: Bug Fix
Doc Text:
Bug: When an import failed, "unable to flush" error messages were logged. Fix: When an import fails, the server now closes the database files before deleting them. Result: No more "unable to flush" error messages are delivered.
Last Closed: 2017-08-01 21:12:24 UTC

Description Noriko Hosoi 2016-12-19 17:46:40 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/49071

Ticket was cloned from Red Hat Bugzilla (product "Red Hat Enterprise Linux 6"): https://bugzilla.redhat.com/show_bug.cgi?id=1402012 (Bug 1402012)


Please note that this bug is private and may not be accessible, as it contains confidential Red Hat customer information.


{{{
Description of problem: When importing a big LDIF file with duplicated DNs, the
server throws "unable to flush: No such file or directory" errors in the error
log. I encountered this issue while verifying
https://bugzilla.redhat.com/show_bug.cgi?id=1368209.

Version-Release number of selected component (if applicable):
389-ds-base-1.2.11.15-85


How reproducible: Consistently, with big LDIF files that contain duplicate DNs.


Steps to Reproduce:
1. Install the latest 389-ds-base
2. Create an instance and a few entries under the suffix "dc=importest,dc=com"
3. Import the LDIF file (attached to the BZ) using the ldif2db.pl script.
/usr/lib64/dirsrv/slapd-Inst1/ldif2db.pl -D "cn=Directory Manager" -w Secret123
-n importest1121 -s "dc=importest,dc=com" -i
/var/lib/dirsrv/slapd-Inst1/ldif/MyNew02_01.ldif
4. Check the output of error logs when online import is running.
tail -f /var/log/dirsrv/slapd-Inst1/errors

DB errors observed on the error logs:
libdb: importest1121/uid.db4: unable to flush: No such file or directory
libdb: importest1121/sn.db4: unable to flush: No such file or directory

Actual results:

[06/Dec/2016:08:32:56 -0500] - Bringing importest1121 offline...
[06/Dec/2016:08:32:56 -0500] - WARNING: Import is running with
nsslapd-db-private-import-mem on; No other process is allowed to access the
database
[06/Dec/2016:08:32:56 -0500] - import importest1121: Beginning import job...
[06/Dec/2016:08:32:56 -0500] - import importest1121: Index buffering enabled
with bucket size 19
[06/Dec/2016:08:32:56 -0500] - import importest1121: Processing file
"/var/lib/dirsrv/slapd-Inst1/ldif/MyNew02_01.ldif"
[06/Dec/2016:08:33:16 -0500] - import importest1121: Processed 40800 entries --
average rate 2040.0/sec, recent rate 2040.0/sec, hit ratio 0%
[06/Dec/2016:08:33:37 -0500] - import importest1121: Processed 81353 entries --
average rate 2033.8/sec, recent rate 2033.8/sec, hit ratio 100%
[06/Dec/2016:08:33:47 -0500] entryrdn-index - _entryrdn_insert_key: Same DN
(dn: ou=Netscape Servers,dc=importest,dc=com) is already in the entryrdn file
with different ID 160.  Expected ID is 100315.
[06/Dec/2016:08:33:47 -0500] - import importest1121: Duplicated DN detected:
"ou=Netscape Servers,dc=importest,dc=com": Entry ID: (100315)
[06/Dec/2016:08:33:47 -0500] - import importest1121: Aborting all Import
threads...
[06/Dec/2016:08:33:52 -0500] - import importest1121: Import threads aborted.
[06/Dec/2016:08:33:52 -0500] - import importest1121: Closing files...
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/nsuniqueid.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/parentid.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/cn.db4: unable to flush: No
such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/givenName.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/entryrdn.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/uid.db4: unable to flush:
No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/telephoneNumber.db4: unable
to flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/mail.db4: unable to flush:
No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/sn.db4: unable to flush: No
such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/id2entry.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/seeAlso.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - libdb: importest1121/objectclass.db4: unable to
flush: No such file or directory
[06/Dec/2016:08:33:53 -0500] - import importest1121: Import failed.


Expected results: More meaningful error messages.


Additional info:
}}}
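The LDIF file attached to the BZ is not reproduced here, but a minimal file that triggers the same duplicate-DN abort can be generated with a short script. This is a sketch: the suffix, output path, and entry counts are illustrative assumptions, not the customer's data.

```python
# Generate a small LDIF containing one duplicated DN, which makes an
# import (e.g. via ldif2db.pl) detect "Duplicated DN" and abort mid-run.
# Suffix, path, and counts are illustrative assumptions.

def make_dup_ldif(path, suffix="dc=importest,dc=com", n=50, dup_index=10):
    with open(path, "w") as f:
        # Suffix entry first, e.g. "dc: importest"
        dc = suffix.split(",")[0].split("=")[1]
        f.write(f"dn: {suffix}\nobjectclass: top\nobjectclass: domain\ndc: {dc}\n\n")
        for i in range(n):
            # Reuse an earlier RDN once to create the duplicate DN
            rdn = f"ou=dup{dup_index:05d}" if i == dup_index + 1 else f"ou=dup{i:05d}"
            f.write(f"dn: {rdn},{suffix}\nobjectclass: top\n"
                    f"objectclass: organizationalUnit\nou: {rdn.split('=')[1]}\n\n")

make_dup_ldif("/tmp/dup.ldif")
```

Importing the generated file into a scratch instance should reproduce the abort path (and, before the fix, the "unable to flush" noise) without needing the original large LDIF.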

Comment 2 Amita Sharma 2017-05-02 08:23:46 UTC
[0 root@qeos-205 export]# rpm -qa | grep 389
389-ds-base-1.3.6.1-9.el7.x86_64
389-ds-base-debuginfo-1.3.6.1-9.el7.x86_64
389-ds-base-snmp-1.3.6.1-9.el7.x86_64
389-ds-base-libs-1.3.6.1-9.el7.x86_64

[0 root@qeos-205 export]# /usr/lib64/dirsrv/slapd-qeos-205/ldif2db.pl -D "cn=Directory Manager" -w Secret123  -s "dc=example,dc=com" -n userRoot -i /export/data.ldif 
Successfully added task entry "cn=import_2017_5_2_4_18_2, cn=import, cn=tasks, cn=config"


Error logs
==========

[02/May/2017:04:18:02.683800764 -0400] - INFO - ldbm_back_ldif2ldbm - Bringing userRoot offline...
[02/May/2017:04:18:02.689439923 -0400] - INFO - dblayer_instance_start - Import is running with nsslapd-db-private-import-mem on; No other process is allowed to access the database
[02/May/2017:04:18:02.700635435 -0400] - INFO - import_main_offline - import userRoot: Beginning import job...
[02/May/2017:04:18:02.710785623 -0400] - INFO - import_main_offline - import userRoot: Index buffering enabled with bucket size 17
[02/May/2017:04:18:02.912644641 -0400] - INFO - import_producer - import userRoot: Processing file "/export/data.ldif"
[02/May/2017:04:18:02.917887589 -0400] - INFO - import_producer - import userRoot: Finished scanning file "/export/data.ldif" (3 entries)
[02/May/2017:04:18:03.138419513 -0400] - ERR - _entryrdn_insert_key - Same DN (dn: ou=myDups00001,dc=example,dc=com) is already in the entryrdn file with different ID 2.  Expected ID is 3.
[02/May/2017:04:18:03.141440304 -0400] - ERR - foreman_do_entryrdn - import userRoot: Duplicated DN detected: "ou=myDups00001,dc=example,dc=com": Entry ID: (3)
[02/May/2017:04:18:03.216392581 -0400] - ERR - import_run_pass - import userRoot: Thread monitoring returned: -23

[02/May/2017:04:18:03.217953877 -0400] - ERR - import_main_offline - import userRoot: Aborting all Import threads...
[02/May/2017:04:18:08.725796213 -0400] - ERR - import_main_offline - import userRoot: Import threads aborted.
[02/May/2017:04:18:08.735978386 -0400] - INFO - import_main_offline - import userRoot: Closing files...
[02/May/2017:04:18:08.748203709 -0400] - ERR - import_main_offline - import userRoot: Import failed.


cat /export/data.ldif
====================
dn: dc=example,dc=com
objectclass: top
objectclass: domain
dc: example

dn: ou=myDups00001,dc=example,dc=com
objectclass: top
objectclass: organizationalUnit
ou: myDups00001

dn: ou=myDups00001,dc=example,dc=com
objectclass: top
objectclass: organizationalUnit
ou: myDups00001
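Since the duplicate DN is what makes the import fail, it can help to pre-check an LDIF before running ldif2db.pl. A minimal sketch (the path is an assumption; it only handles plain `dn:` lines, not base64-encoded or line-folded DNs):

```python
from collections import Counter

def find_duplicate_dns(ldif_path):
    """Return DNs that appear more than once in an LDIF file.

    Limitation: only plain "dn: " lines are handled; base64 ("dn:: ")
    and line-folded DNs are not.
    """
    dns = Counter()
    with open(ldif_path) as f:
        for line in f:
            if line.lower().startswith("dn: "):
                # Lowercase/strip to approximate case-insensitive DN matching
                dns[line[4:].strip().lower()] += 1
    return [dn for dn, count in dns.items() if count > 1]

# e.g. find_duplicate_dns("/export/data.ldif") on the file above would
# report ["ou=mydups00001,dc=example,dc=com"]
```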

Comment 3 errata-xmlrpc 2017-08-01 21:12:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2086