Bug 1315181

Summary: change severity of some messages related to "keep alive" entries
Product: Red Hat Enterprise Linux 7
Reporter: Jan Kurik <jkurik>
Component: 389-ds-base
Assignee: Noriko Hosoi <nhosoi>
Status: CLOSED ERRATA
QA Contact: Viktor Ashirov <vashirov>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 7.2
CC: ekeck, gparente, lkuprova, nhosoi, nkinder, rmeggins, sramling, tbordaz, vashirov
Target Milestone: rc
Keywords: ZStream
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: 389-ds-base-1.3.4.0-28.el7_2
Doc Type: Bug Fix
Doc Text:
Keep alive entries are used in fractional replication to prevent skipped updates from being evaluated several times. If a large number of updates is skipped, these entries can be updated very frequently. Before a keep alive entry is updated, the server checks whether it already exists. Previously, this check was logged at the "Fatal" log level, so the message appeared regardless of the configured log level and the error log filled up with unnecessary messages. With this update, the messages related to keep alive entry creation are logged at the "Replication debugging" level (8192) instead of "Fatal", so the error log is no longer flooded with them. (A sketch of re-enabling these messages via the error log level follows the field list below.)
Story Points: ---
Clone Of: 1314557
Environment:
Last Closed: 2016-03-31 22:04:56 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1314557    
Bug Blocks:    
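
A minimal sketch of enabling the "Replication debugging" error log level (8192) mentioned in the Doc Text above, so the relocated keep alive messages become visible again. The bind URL, port, and Directory Manager credentials are assumptions; the slapd-M1 log path is the instance used in Comment 4 below. Note the previous nsslapd-errorlog-level value first and restore it afterwards, since replication debugging is verbose.

# Sketch: switch the error log to replication debugging (8192),
# assuming a local instance on port 389 and Directory Manager access
# (ldapmodify prompts for the password because of -W).
ldapmodify -x -H ldap://localhost:389 -D "cn=Directory Manager" -W <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-errorlog-level
nsslapd-errorlog-level: 8192
EOF

# With replication debugging enabled, the keep alive messages can be
# found the same way as in Comment 4:
grep -i "cn=repl keep alive" /var/log/dirsrv/slapd-M1/errors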

Description Jan Kurik 2016-03-07 07:34:18 UTC
This bug has been copied from bug #1314557 and has been proposed
to be backported to 7.2 z-stream (EUS).

Comment 4 Sankar Ramalingam 2016-03-10 13:48:09 UTC
[root@vm-idm-004 MMR_WINSYNC]# grep -i "cn=repl keep" /var/log/dirsrv/slapd-*/errors
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:51 +051800] NSMMReplicationPlugin - Need to create replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com>
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:51 +051800] NSMMReplicationPlugin - add dn: cn=repl keep alive 2211,dc=passsync,dc=com
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:51 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:51 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:51 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:52 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:52 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists
/var/log/dirsrv/slapd-M1/errors:[08/Mar/2016:14:24:52 +051800] NSMMReplicationPlugin - replication keep alive entry <cn=repl keep alive 2211,dc=passsync,dc=com> already exists


The above lines are from the 1.3.4.0-27 build.

After upgrading to 1.3.4.0-28, I didn't notice any errors logged related to the "cn=repl keep alive" entries (see the recap at the end of this comment).

Build tested:
[root@vm-idm-004 MMR_WINSYNC]# rpm -qa |grep -i 389-ds
389-ds-base-devel-1.3.4.0-28.el7_2.x86_64
389-ds-base-debuginfo-1.3.4.0-28.el7_2.x86_64
389-ds-base-libs-1.3.4.0-28.el7_2.x86_64
389-ds-base-1.3.4.0-28.el7_2.x86_64

[root@vm-idm-004 MMR_WINSYNC]# rpm -qi 389-ds-base |grep -i Install
Install Date: Thu 10 Mar 2016 07:24:41 AM IST


Hence, marking the bug as Verified.
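
As a compact recap of the verification above, a sketch of the two checks, assuming the slapd-M1 instance and the default error log level; the expected results are taken from the output shown earlier in this comment:

# Confirm the fixed build is installed (expect 389-ds-base-1.3.4.0-28.el7_2 or later).
rpm -q 389-ds-base

# Count keep alive messages in the error log. Expect no new matches after
# the upgrade; entries written by the older build may still be present,
# and with the fix these messages are logged only at the
# "Replication debugging" (8192) error log level.
grep -ci "cn=repl keep alive" /var/log/dirsrv/slapd-M1/errors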

Comment 11 errata-xmlrpc 2016-03-31 22:04:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0550.html