Bug 1404338 - Check IdM Topology for broken record caused by replication conflict before upgrading it
Summary: Check IdM Topology for broken record caused by replication conflict before upgrading it
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: ipa
Version: 7.3
Hardware: All
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: IPA Maintainers
QA Contact: Kaleem
Docs Contact: Marc Muehlfeld
URL:
Whiteboard:
Depends On: 1398670
Blocks:
 
Reported: 2016-12-13 15:56 UTC by Marcel Kolaja
Modified: 2020-03-11 15:30 UTC
CC List: 11 users

Fixed In Version: ipa-4.4.0-14.el7_3.3
Doc Type: Bug Fix
Doc Text:
Previously, if an Identity Management (IdM) upgrade ran simultaneously on multiple servers, replication conflict entries were sometimes generated in the "cn=topology" subtree. If the domain level was raised while the conflict entries existed, the generated topology segments were sometimes distributed between correct and conflict entries, and one-directional segments failed to receive the data. As a consequence, IdM clients and commands failed. A patch has been applied to reject raising the domain level while replication conflicts exist. As a result, topology segments are now created only in a database without conflict entries.
Clone Of: 1398670
Environment:
Last Closed: 2017-01-17 18:23:40 UTC
Target Upstream Version:
Embargoed:
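The fix described in the Doc Text rejects raising the domain level while replication conflict entries exist in the directory. As a rough illustration of what such a pre-check looks for, the sketch below scans LDIF text (for example, the output of an ldapsearch for "(nsds5ReplConflict=*)") for entries carrying the 389-ds nsds5ReplConflict attribute. The helper name and sample DNs are hypothetical; this is not IdM's actual implementation.

```python
def find_conflict_entries(ldif_text: str) -> list[str]:
    """Return the DNs of LDIF entries that carry an nsds5ReplConflict
    attribute (the marker 389-ds puts on replication conflict entries)."""
    conflicts = []
    current_dn = None
    has_conflict = False
    for line in ldif_text.splitlines():
        if line.startswith("dn: "):
            # Starting a new entry; flush the previous one if it conflicted.
            if current_dn and has_conflict:
                conflicts.append(current_dn)
            current_dn = line[4:].strip()
            has_conflict = False
        elif line.lower().startswith("nsds5replconflict:"):
            has_conflict = True
    if current_dn and has_conflict:
        conflicts.append(current_dn)
    return conflicts
```

An upgrade-time check along these lines would refuse to proceed whenever the returned list is non-empty.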




Links
Red Hat Product Errata RHBA-2017:0089 (normal, SHIPPED_LIVE): ipa bug fix update; last updated 2017-01-17 22:55:29 UTC

Description Marcel Kolaja 2016-12-13 15:56:16 UTC
This bug has been copied from bug #1398670 and has been proposed
to be backported to 7.3 z-stream (EUS).

Comment 7 Nikhil Dehadrai 2017-01-05 16:06:53 UTC
IPA server version: ipa-server-4.4.0-14.el7_3.4.x86_64

Tested the bug on the basis of following points:

Steps (upgrade from 7.2.z to 7.3.2):
====================================
1) Install master on RHEL 7.2.z. (In my case ipa-server.x86_64 0:4.2.0-15.el7_2.19).
2) Install replica on RHEL 7.2.z against the master from step 1, using the ipa-replica-prepare command.
3) Stop replica server using "ipactl stop".
4) Configure repos for RHEL 7.3.2 on Master and Replica.
5) Upgrade master to RHEL 7.3.2 and stop master using command "ipactl stop".
6) Start replica using command "ipactl start" and upgrade replica to RHEL 7.3.2 using command "yum -y update 'ipa*' sssd".
7) Start master server using command "ipactl start".
8) Run "kinit admin" both on master and replica.
9) Run "ipa domainlevel-set 1" both on Master and Replica.

Observations:
==============
1) Both Master and Replica are upgraded successfully after steps 5 and 6.
2) After step 9, the following error message is received on the Master:
#ipa domainlevel-set 1
ipa: ERROR: Domain Level cannot be raised to 1, server <replica.testrelm.test> does not support it.

3) After step 9, the following error message is received on the REPLICA:
ipa domainlevel-set 1
ipa: ERROR: Major (851968): Unspecified GSS failure.  Minor code may provide more information, Minor (2529639068): Cannot contact any KDC for realm 'TESTRELM.TEST'

Thus, on the basis of the above observations, marking the status of the bug as "ASSIGNED".

Comment 8 thierry bordaz 2017-01-06 10:52:31 UTC
Verification of 1404338 depends on 1410514.

I can imagine a rather crude workaround to verify 1404338, but I am not sure it is acceptable or that it will work.

1) Install master on RHEL 7.2.z. (In my case ipa-server.x86_64 0:4.2.0-15.el7_2.19).
2) Install replica on RHEL 7.2.z against the master from step 1, using the ipa-replica-prepare command.
   2-1) Configure repos for RHEL 7.3.2 on Master and Replica.

3) stop master using command "ipactl stop"

4) Upgrade replica to RHEL 7.3.2 using command "yum -y update 'ipa*' sssd".

5) Stop replica server using "ipactl stop".

  5-1) edit dse.ldif to disable the CoS plugin on the replica
dn: cn=Class of Service,cn=plugins,cn=config
nsslapd-pluginEnabled: off

dn: cn=Legacy Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove

dn: cn=Multimaster Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove

dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove


6) Start master using command "ipactl start".
7) Upgrade master to RHEL 7.3.2 and stop master using command "ipactl stop".

  7-1) edit dse.ldif to disable the CoS plugin on the master
dn: cn=Class of Service,cn=plugins,cn=config
nsslapd-pluginEnabled: off

dn: cn=Legacy Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove

dn: cn=Multimaster Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove

dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- remove


8) Start replica using command "ipactl start" 
9) Start master server using command "ipactl start"
10) Wait a few minutes for replication to occur.

11) Stop replica server using "ipactl stop".

  11-1) edit dse.ldif to enable the CoS plugin on the replica
dn: cn=Class of Service,cn=plugins,cn=config
nsslapd-pluginEnabled: on

dn: cn=Legacy Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add

dn: cn=Multimaster Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add

dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add

12) Stop master server using "ipactl stop".

  12-1) edit dse.ldif to enable the CoS plugin on the master
dn: cn=Class of Service,cn=plugins,cn=config
nsslapd-pluginEnabled: on

dn: cn=Legacy Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add

dn: cn=Multimaster Replication Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add

dn: cn=Retro Changelog Plugin,cn=plugins,cn=config
nsslapd-plugin-depends-on-named: Class of Service  <-- add



13) Start replica using command "ipactl start" 
14) Start master server using command "ipactl start"
15) Run "kinit admin" both on master and replica.
16) Run "ipa domainlevel-set 1" both on Master and Replica.
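The dse.ldif edits in steps 5-1/7-1 (disable) and 11-1/12-1 (re-enable) above can be scripted. The sketch below is a hypothetical helper, not part of IPA: it rewrites dse.ldif text to flip the Class of Service plugin and the "nsslapd-plugin-depends-on-named" lines exactly as shown in the steps, under the assumptions that the server is stopped before dse.ldif is edited and that the dependency lines were previously removed when re-enabling.

```python
# Plugin entries whose "depends-on-named: Class of Service" line is
# removed when disabling and re-added when enabling (per steps above).
PLUGINS = (
    "cn=Legacy Replication Plugin,cn=plugins,cn=config",
    "cn=Multimaster Replication Plugin,cn=plugins,cn=config",
    "cn=Retro Changelog Plugin,cn=plugins,cn=config",
)

def toggle_cos_plugin(dse_text: str, enable: bool) -> str:
    """Return dse.ldif text with the CoS plugin turned on or off."""
    out = []
    current_dn = None
    for line in dse_text.splitlines():
        if line.startswith("dn: "):
            current_dn = line[4:].strip()
            out.append(line)
            # When enabling, re-add the dependency right after the dn line
            # (assumes it was removed earlier by the disable pass).
            if enable and current_dn in PLUGINS:
                out.append("nsslapd-plugin-depends-on-named: Class of Service")
            continue
        if (current_dn == "cn=Class of Service,cn=plugins,cn=config"
                and line.startswith("nsslapd-pluginEnabled:")):
            out.append("nsslapd-pluginEnabled: " + ("on" if enable else "off"))
            continue
        if (not enable and current_dn in PLUGINS
                and line == "nsslapd-plugin-depends-on-named: Class of Service"):
            continue  # drop the dependency while the CoS plugin is off
        out.append(line)
    return "\n".join(out) + "\n"
```

A real script would also back up dse.ldif first and only touch the instance's copy under /etc/dirsrv/slapd-*/ while the server is down.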

Comment 9 Nikhil Dehadrai 2017-01-06 14:33:53 UTC
Hi Thierry,

As per the steps/workaround mentioned in Comment#8, I was able to verify the bug:

ON MASTER (after upgrade to 7.3.2):
=====================================
[root@vm-idm-030 slapd-TESTRELM-TEST]# ipa domainlevel-set 1
ipa: ERROR: Domain Level cannot be raised to 1, existing replication conflicts have to be resolved.
[root@vm-idm-030 slapd-TESTRELM-TEST]# ipa-replica-manage list
vm-idm-030.testrelm.test: master
auto-hv-01-guest01.testrelm.test: master
[root@vm-idm-030 slapd-TESTRELM-TEST]# ipa domainlevel-get
-----------------------
Current domain level: 0
-----------------------

ON REPLICA (after upgrade to 7.3.2):
=====================================
[root@auto-hv-01-guest01 slapd-TESTRELM-TEST]# ipa domainlevel-set 1
ipa: ERROR: Domain Level cannot be raised to 1, existing replication conflicts have to be resolved.
[root@auto-hv-01-guest01 slapd-TESTRELM-TEST]# ipa-replica-manage list
vm-idm-030.testrelm.test: master
auto-hv-01-guest01.testrelm.test: master
[root@auto-hv-01-guest01 slapd-TESTRELM-TEST]# ipa domainlevel-get
-----------------------
Current domain level: 0
-----------------------
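For regression automation, the banner printed by "ipa domainlevel-get" above is easy to check programmatically. A minimal sketch (hypothetical helper; the banner format is taken from the output shown above):

```python
import re

def parse_domain_level(output: str) -> int:
    """Extract the level from 'ipa domainlevel-get' banner output."""
    m = re.search(r"Current domain level:\s*(\d+)", output)
    if m is None:
        raise ValueError("no domain level found in output")
    return int(m.group(1))
```

A test harness could assert the level is still 0 after the rejected "ipa domainlevel-set 1" attempts above.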

Comment 10 Nikhil Dehadrai 2017-01-09 06:17:10 UTC
Thus, on the basis of the steps provided in Comment#8 and the respective observations in Comment#9, marking the status of the bug as "VERIFIED".

Comment 12 errata-xmlrpc 2017-01-17 18:23:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0089.html

