Bug 1314729 - change severity of some messages related to "keep alive" entries
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: 389-ds-base
Version: 6.7
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Noriko Hosoi
QA Contact: Viktor Ashirov
Keywords: ZStream
Depends On: 1313258
Blocks:
 
Reported: 2016-03-04 05:59 EST by Jaroslav Reznik
Modified: 2016-03-22 10:47 EDT (History)
CC List: 7 users

See Also:
Fixed In Version: 389-ds-base-1.2.11.15-72.el6_7
Doc Type: Bug Fix
Doc Text:
Keep alive entries are used to prevent skipped updates from being evaluated several times in fractional replication. If a large number of updates is skipped, these entries can be updated very frequently. Before a keep alive entry is updated, the server checks whether it exists. Previously, this check was logged at the "Fatal" log level, so it was always written regardless of the configured log level. As a consequence, the error log was filled with unnecessary messages. With this update, the log level for keep alive entry creation has been changed from "Fatal" to "Replication debugging" (8192), and the error log file is no longer filled with these messages.
Story Points: ---
Clone Of: 1313258
Environment:
Last Closed: 2016-03-22 10:47:51 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Jaroslav Reznik 2016-03-04 05:59:25 EST
This bug has been copied from bug #1313258 and has been proposed
to be backported to 6.7 z-stream (EUS).
Comment 5 Amita Sharma 2016-03-10 02:43:45 EST
[root@ipaqa64vml ~]# rpm -qa | grep 389
389-ds-base-libs-1.2.11.15-72.el6_7.x86_64
389-ds-base-1.2.11.15-72.el6_7.x86_64
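
Before comparing the two runs, the current error log level can be confirmed on the supplier; a minimal sketch, assuming Directory Manager credentials and the M1 instance listening on the default port 389:

# Read the current error log level on M1 (bind DN and port are assumptions
# for this sketch); a value that includes 8192 means replication debugging
# messages are being written to the error log.
ldapsearch -x -H ldap://localhost:389 -D "cn=Directory Manager" -W \
    -b cn=config -s base nsslapd-errorlog-level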


Error logs without the replication log level:
[root@ipaqa64vml ~]# tail -f /var/log/dirsrv/slapd-M2/errors
[10/Mar/2016:02:31:20 -0500] - import userRoot: Workers finished; cleaning up...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Workers cleaned up.
[10/Mar/2016:02:31:20 -0500] - import userRoot: Indexing complete.  Post-processing...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Generating numSubordinates complete.
[10/Mar/2016:02:31:20 -0500] - import userRoot: Flushing caches...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Closing files...
[10/Mar/2016:02:31:20 -0500] - import userRoot: Import complete.  Processed 11 entries in 3 seconds. (3.67 entries/sec)
[10/Mar/2016:02:31:21 -0500] NSMMReplicationPlugin - multimaster_be_state_change: replica dc=example,dc=com is coming online; enabling replication

[root@ipaqa64vml yum.repos.d]# tail -f /var/log/dirsrv/slapd-M1/errors
[10/Mar/2016:02:20:08 -0500] - slapd started.  Listening on All Interfaces port 389 for LDAP requests
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - Beginning total update of replica "agmt="cn=M1_to_M2" (localhost:1389)".
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - Need to create replication keep alive entry <cn=repl keep alive 1,dc=example,dc=com>
[10/Mar/2016:02:24:02 -0500] NSMMReplicationPlugin - add dn: cn=repl keep alive 1,dc=example,dc=com
objectclass: top
objectclass: ldapsubentry
objectclass: extensibleObject
cn: repl keep alive 1
[10/Mar/2016:02:24:06 -0500] NSMMReplicationPlugin - Finished total update of replica "agmt="cn=M1_to_M2" (localhost:1389)". Sent 10 entries.
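
For the second run below, replication debugging can be added to the error log level; a minimal sketch, assuming Directory Manager credentials and the M1 instance on port 389 (8192 is the "Replication debugging" level mentioned in the Doc Text):

# Raise the error log level so that replication debugging messages, including
# the keep alive entry checks, appear in the error log. Bind DN and port are
# assumptions for this sketch.
ldapmodify -x -H ldap://localhost:389 -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192
EOF

Setting nsslapd-errorlog-level back to its previous value afterwards returns error logging to the default behavior.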


Error logs with the replication log level enabled:
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Replica was successfully acquired.
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - Beginning total update of replica "agmt="cn=M1_to_M2" (localhost:1389)".
[10/Mar/2016:02:27:20 -0500] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 1,dc=example,dc=com> already exists
[10/Mar/2016:02:27:20 -0500] - repl5_tot_waitfor_async_results: 0 15
[10/Mar/2016:02:27:21 -0500] - repl5_tot_waitfor_async_results: 15 15
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Successfully released consumer
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Beginning linger on the connection
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - Finished total update of replica "agmt="cn=M1_to_M2" (localhost:1389)". Sent 11 entries.
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Cancelling linger on the connection
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Disconnected from the consumer
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): State: start -> ready_to_acquire_replica
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Trying non-secure slapi_ldap_init_ext
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): binddn = cn=replication manager,cn=config,  passwd = {DES}2mO8LgMr/w5ognn2BRK+gQ==
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): No linger to cancel on the connection
[10/Mar/2016:02:27:23 -0500] - _csngen_adjust_local_time: gen state before 56e121d70001:1457594839:0:0
[10/Mar/2016:02:27:23 -0500] - _csngen_adjust_local_time: gen state after 56e121db0000:1457594843:0:0
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): Replica was successfully acquired.
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): State: ready_to_acquire_replica -> sending_updates
[10/Mar/2016:02:27:23 -0500] - csngen_adjust_time: gen state before 56e121db0001:1457594843:0:0
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - changelog program - _cl5GetDBFile: found DB object 7f425c015c60 for database /var/lib/dirsrv/slapd-M1/changelogdb/cb96b982-e69011e5-b456b189-0a306cbb_56e120a6000000010000.db4
[10/Mar/2016:02:27:23 -0500] - _cl5PositionCursorForReplay (agmt="cn=M1_to_M2" (localhost:1389)): Consumer RUV:
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replicageneration} 56e120a6000000010000
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replica 2 ldap://ipaqa64vml.idmqe.lab.eng.bos.redhat.com:1389}
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replica 1 ldap://ipaqa64vml.idmqe.lab.eng.bos.redhat.com:389} 56e12112000100010000 56e1216d000000010000 00000000
[10/Mar/2016:02:27:23 -0500] - _cl5PositionCursorForReplay (agmt="cn=M1_to_M2" (localhost:1389)): Supplier RUV:
[10/Mar/2016:02:27:23 -0500] NSMMReplicationPlugin - agmt="cn=M1_to_M2" (localhost:1389): {replicageneration} 56e120a6000000010000

Hence marking VERIFIED.
Comment 7 errata-xmlrpc 2016-03-22 10:47:51 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0468.html
