Bug 1267405 - many attrlist_replace errors in connection with cleanallruv
Summary: many attrlist_replace errors in connection with cleanallruv
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: 389-ds-base
Version: 6.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
Docs Contact: Petr Bokoc
URL:
Whiteboard:
Depends On:
Blocks: 1273132
 
Reported: 2015-09-30 00:27 UTC by Noriko Hosoi
Modified: 2020-09-13 21:32 UTC
CC: 8 users

Fixed In Version: 389-ds-base-1.2.11.15-67.el6
Doc Type: Bug Fix
Doc Text:
Directory Server no longer logs false `attrlist_replace` errors

Previously, Directory Server could in some circumstances repeatedly log spurious `attrlist_replace` error messages. This problem was caused by memory corruption resulting from the use of a wrong memory copy function. That function has been replaced with `memmove`, which prevents the memory corruption, and the server no longer logs these error messages.
Clone Of:
Clones: 1273132 (view as bug list)
Environment:
Last Closed: 2016-05-10 19:21:38 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github 389ds 389-ds-base issues 1614 0 None closed many attrlist_replace errors in connection with cleanallruv 2020-11-25 20:58:30 UTC
Red Hat Product Errata RHBA-2016:0737 0 normal SHIPPED_LIVE 389-ds-base bug fix and enhancement update 2016-05-10 22:29:13 UTC

Description Noriko Hosoi 2015-09-30 00:27:20 UTC
There are reports of frequent error messages like:

{{{

[22/Sep/2015:17:20:45 +0200] attrlist_replace - attr_replace (nsds50ruv, {replicageneration} 56012d2f000000040000) failed.
[22/Sep/2015:17:21:15 +0200] attrlist_replace - attr_replace (nsds50ruv, {replicageneration} 56012d2f000000040000) failed.
[22/Sep/2015:17:21:45 +0200] attrlist_replace - attr_replace (nsds50ruv, {replicageneration} 56012d2f000000040000) failed.
}}}

These messages appear mostly in connection with cleanallruv, but can also appear independently.

Comment 1 Noriko Hosoi 2015-10-09 00:19:20 UTC
Hi Ludwig,

Could you suggest verification steps for this bug?
If we just run cleanallruv and don't see the "attrlist_replace - attr_replace (nsds50ruv, {replicageneration} 56012d2f000000040000) failed." message in the error log, can we say the fix is verified?
Thanks!

Comment 2 Ludwig 2015-10-12 13:40:05 UTC
Hi Noriko,

There are two requirements to trigger the bug:
- The RUV needs to contain a large number of RIDs; I have seen this with 9 replicas.
- The RID to be cleaned should be near the beginning of the replica list in the RUV (position 1 is always the local RID), so a RID at position 2 or 3 should do.

So if you have a RUV like:
nsds50ruv: {replicageneration} 51dc3bac000000640000
nsds50ruv: {replica 200 ldap://localhost:5200} 5609f0a4000000c80000 560e0886000000c80000
nsds50ruv: {replica 100 ldap://localhost:5100} 5609deae000000640000 560e0970000100640000
nsds50ruv: {replica 500 ldap://localhost:5500} 5609f0a4000000c80000 560e0886000000c80000
nsds50ruv: {replica 400 ldap://localhost:5400} 5609deae000000640000 560e0970000100640000
nsds50ruv: {replica 600 ldap://localhost:5600} 5609f0a4000000c80000 560e0886000000c80000
nsds50ruv: {replica 700 ldap://localhost:5700} 5609deae000000640000 560e0970000100640000
nsds50ruv: {replica 800 ldap://localhost:5800} 5609f0a4000000c80000 560e0886000000c80000
nsds50ruv: {replica 900 ldap://localhost:5900} 5609deae000000640000 560e0970000100640000
nsds50ruv: {replica 300 ldap://localhost:5300} 5609f0a4000000c80000 560e0886000000c80000

then clean RID 100; if you don't see the message, the fix is verified.

Comment 8 Sankar Ramalingam 2016-03-16 14:47:29 UTC
Verified on a 6-master replication setup:

PORT="1189" ; ldapsearch -LLL -x -p $PORT -h localhost -D "cn=Directory Manager" -w Secret123 -b "dc=passsync,dc=com" -s one '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' nsds50ruv

dn: nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff,dc=passsync,dc=com
nsds50ruv: {replicageneration} 56d266c6000008a30000
nsds50ruv: {replica 2211 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:1189}
  56d2699a000008a30000 56e96992001708a30000
nsds50ruv: {replica 2216 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:3289}
nsds50ruv: {replica 2215 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:3189}
nsds50ruv: {replica 2214 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:2289}
nsds50ruv: {replica 2213 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:2189}
nsds50ruv: {replica 2212 ldap://qe-blade-01.idmqe.lab.eng.bos.redhat.com:1289}

Comment 9 Sankar Ramalingam 2016-03-16 15:17:38 UTC
[root@qe-blade-01 MMR_WINSYNC]# ldapmodify -x -p 1189 -h localhost -D "cn=Directory Manager" -w Secret123 -avf /export/cleanruv.ldif 
ldap_initialize( ldap://localhost:1189 )
add cn:
	M3clean
add objectclass:
	extensibleObject
add replica-base-dn:
	dc=passsync,dc=com
add replica-id:
	2213
adding new entry "cn=M3clean,cn=cleanallruv,cn=tasks,cn=config"
modify complete

[root@qe-blade-01 MMR_WINSYNC]# cat /export/cleanruv.ldif 
dn: cn=M3clean,cn=cleanallruv,cn=tasks,cn=config
cn: M3clean
objectclass: extensibleObject
replica-base-dn: dc=passsync,dc=com
replica-id: 2213

[root@qe-blade-01 MMR_WINSYNC]# tail -f /var/log/dirsrv/slapd-M1/errors
[16/Mar/2016:10:51:46 -0400] NSMMReplicationPlugin - CleanAllRUV Task (rid 2213): Waiting for all the replicas to be cleaned... 
[16/Mar/2016:10:51:47 -0400] NSMMReplicationPlugin - CleanAllRUV Task (rid 2213): Waiting for all the replicas to finish cleaning... 
[16/Mar/2016:10:51:47 -0400] NSMMReplicationPlugin - CleanAllRUV Task (rid 2213): Not all replicas finished cleaning, retrying in 10 seconds 
[16/Mar/2016:10:51:57 -0400] NSMMReplicationPlugin - CleanAllRUV Task (rid 2213): Successfully cleaned rid(2213). 


[root@qe-blade-01 MMR_WINSYNC]# grep -i "attr_replace*.*failed*" /var/log/dirsrv/slapd-*/errors
[root@qe-blade-01 MMR_WINSYNC]# echo $?
1
[root@qe-blade-01 MMR_WINSYNC]# 

No "attrlist_replace - attr_replace" errors were observed. Hence, marking the bug as Verified.

[root@qe-blade-01 MMR_WINSYNC]# rpm -qa |grep -i 389-ds
389-ds-base-libs-1.2.11.15-74.el6.x86_64
389-ds-base-1.2.11.15-74.el6.x86_64
389-ds-base-debuginfo-1.2.11.15-73.el6.x86_64
389-ds-base-devel-1.2.11.15-74.el6.x86_64

Comment 11 errata-xmlrpc 2016-05-10 19:21:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0737.html

