Bug 1314729 - change severity of some messages related to "keep alive" entries
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: 389-ds-base
Version: 6.7
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On: 1313258
Blocks:
 
Reported: 2016-03-04 10:59 UTC by Jaroslav Reznik
Modified: 2016-03-22 14:47 UTC
CC List: 7 users

Fixed In Version: 389-ds-base-1.2.11.15-72.el6_7
Doc Type: Bug Fix
Doc Text:
Keep alive entries prevent skipped updates from being evaluated several times during fractional replication. If a large number of updates are skipped, these entries can be updated very frequently. Before a keep alive entry is updated, the server checks whether it exists. Previously, this check was logged at the "Fatal" log level, so the message was written no matter which log level had been configured, and error logs filled with unnecessary messages. With this update, the log level for keep alive entry creation has been changed from "Fatal" to "Replication debugging" (8192), and the error log file is no longer filled with these messages. (A code sketch of this kind of change follows the description below.)
Clone Of: 1313258
Environment:
Last Closed: 2016-03-22 14:47:51 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2016:0468 (priority: normal, status: SHIPPED_LIVE): 389-ds-base bug fix update, last updated 2016-03-22 18:46:51 UTC

Description Jaroslav Reznik 2016-03-04 10:59:25 UTC
This bug has been copied from bug #1313258 and has been proposed
to be backported to 6.7 z-stream (EUS).
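
For illustration, the fix summarized in the Doc Text is typically a one-line severity swap in the plugin's logging call. A minimal sketch follows, assuming the SLAPI plugin logging API; the helper function and its arguments are illustrative placeholders, while slapi_log_error(), SLAPI_LOG_FATAL, and SLAPI_LOG_REPL are the real API names.

#include <slapi-plugin.h>

/* Sketch only: lower the severity of the keep alive entry message.
 * SLAPI_LOG_FATAL messages go to the error log unconditionally;
 * SLAPI_LOG_REPL messages appear only when replication debugging
 * (nsslapd-errorlog-level 8192) is enabled. */
static void
log_keep_alive_needed(char *subsystem, const char *dn)
{
    /* Before the fix (always logged):
     *   slapi_log_error(SLAPI_LOG_FATAL, subsystem,
     *       "Need to create replication keep alive entry <%s>\n", dn);
     */

    /* After the fix (visible only at replication debugging level): */
    slapi_log_error(SLAPI_LOG_REPL, subsystem,
        "Need to create replication keep alive entry <%s>\n", dn);
}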

Comment 5 Amita Sharma 2016-03-10 07:43:45 UTC
[root@ipaqa64vml ~]# rpm -qa | grep 389
389-ds-base-libs-1.2.11.15-72.el6_7.x86_64
389-ds-base-1.2.11.15-72.el6_7.x86_64


Error logs without replication log level:
[root@ipaqa64vml ~]# tail -f /var/log/dirsrv/slapd-M2/errors
[10/Mar/2016:02:31:20 -0500] - import userRoot: Workers finished; cleaning up...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Workers cleaned up.
[10/Mar/2016:02:31:20 -0500] - import userRoot: Indexing complete.  Post-processing...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Generating numSubordinates complete.
[10/Mar/2016:02:31:20 -0500] - import userRoot: Flushing caches...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Closing files...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Import complete.  Processed 11 entries in 3 seconds. (3.67 entries/sec)
[10/Mar/2016:02:31:21 -0500] NSMMReplicationPlugin - multimaster_be_state_change: replica dc=example,dc=com is coming online; enabling replication

[root@ipaqa64vml yum.repos.d]# tail -f /var/log/dirsrv/slapd-M1/errors
[10/Mar/2016:02:20:08 -0500] - slapd started.  Listening on All Interfaces port 389 for LDAP requests
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - Beginning total update of replica "agmt="cn=M1_to_M2" (localhost:1389)".
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - Need to create replication keep alive entry <cn=repl keep alive 1,dc=example,dc=com>
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - add dn: cn=repl keep alive 1,dc=example,dc=com
objectclass: top
objectclass: ldapsubentry
objectclass: extensibleObject
cn: repl keep alive 1
[10/Mar/2016:02:24:06 -0500] NSMMReplicationPlugin - Finished total update of replica "agmt="cn=M1_to_M2" (localhost:1389)". Sent 10 entries.


With replication log level enabled:
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Replica was successfully acquired.
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - Beginning total update of replica "agmt="cn=M1_to_M2" (localhost:1389)".
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 1,dc=example,dc=com> already exists
[10/Mar/2016:02:27:20 -0500] - repl5_tot_waitfor_async_results: 0 15
[10/Mar/2016:02:27:21 -0500] - repl5_tot_waitfor_async_results: 15 15
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Successfully released consumer
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Beginning linger on the connection
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - Finished total update of replica "agmt="cn=M1_to_M2" (localhost:1389)". Sent 11 entries.
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Cancelling linger on the connection
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Disconnected from the consumer
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): State: start -> ready_to_acquire_replica
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Trying non-secure slapi_ldap_init_ext
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): binddn = cn=replication manager,cn=config,  passwd = {DES}2mO8LgMr/w5ognn2BRK+gQ==
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): No linger to cancel on the connection
[10/Mar/2016:02:27:23 -0500] - _csngen_adjust_local_time: gen state before 56e121d70001:1457594839:0:0
[10/Mar/2016:02:27:23 -0500] - _csngen_adjust_local_time: gen state after 56e121db0000:1457594843:0:0
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Replica was successfully acquired.
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): State: ready_to_acquire_replica -> sending_updates
[10/Mar/2016:02:27:23 -0500] - csngen_adjust_time: gen state before 56e121db0001:1457594843:0:0
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - changelog program - _cl5GetDBFile: found DB object 7f425c015c60 for database /var/lib/dirsrv/slapd-M1/changelogdb/cb96b982-e69011e5-b456b189-0a306cbb_56e120a6000000010000.db4
[10/Mar/2016:02:27:23 -0500] - _cl5PositionCursorForReplay (agmt="cn=M1_to_M2" (localhost:1389)): Consumer RUV:
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replicageneration} 56e120a6000000010000
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replica 2 ldap://ipaqa64vml.idmqe.lab.eng.bos.redhat.com:1389}
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replica 1 ldap://ipaqa64vml.idmqe.lab.eng.bos.redhat.com:389} 56e12112000100010000 56e1216d000000010000 00000000
[10/Mar/2016:02:27:23 -0500] - _cl5PositionCursorForReplay (agmt="cn=M1_to_M2" (localhost:1389)): Supplier RUV:
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replicageneration} 56e120a6000000010000
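
(For reference: replication debugging output like the above is typically enabled by adding 8192 to nsslapd-errorlog-level on cn=config. A minimal sketch using ldapmodify; the host, port, and bind DN are placeholders for this test setup.)

ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192
EOF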

Hence marking VERIFIED.

Comment 7 errata-xmlrpc 2016-03-22 14:47:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0468.html

