Bug 1187524 - Replication agreement with replica not disabled when ipa-restore done without IPA installed
Summary: Replication agreement with replica not disabled when ipa-restore done without...
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: ipa
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: David Kupka
QA Contact: Namita Soman
Depends On:
Blocks: 1199060
Reported: 2015-01-30 10:54 UTC by Kaleem
Modified: 2015-11-19 12:01 UTC (History)
9 users

Fixed In Version: ipa-4.2.0-0.1.alpha1.el7
Doc Type: Known Issue
Doc Text:
The userRoot.ldif and ipaca.ldif files, from which Identity Management (IdM) reimports the back end when restoring from backup, cannot be opened during a full-server restore even though they are present in the tar archive containing the IdM backup. Consequently, these files are skipped during the full-server restore. If you restore from a full-server backup, the restored back end can receive some updates from after the backup was created. This is not expected because all updates received between the time the backup was created and the time the restore is performed should be lost. The server is successfully restored, but can contain invalid data. If the restored server containing invalid data is then used to reinitialize a replica, the replica reinitialization succeeds, but the data on the replica is invalid. No workaround is currently available. It is recommended that you do not use a server restored from a full-server IdM backup to reinitialize a replica, which ensures that no unexpected updates are present at the end of the restore and reinitialization process. Note that this known issue relates only to the full-server IdM restore, not to the data-only IdM restore.
Clone Of:
: 1199060 (view as bug list)
Last Closed: 2015-11-19 12:01:11 UTC
Target Upstream Version:

Attachments (Terms of Use)
steps with console output (23.69 KB, text/plain)
2015-01-30 10:54 UTC, Kaleem
modified ipa-restore.py+log files (traces added prefixed by 'XXX') (9.81 KB, application/x-gzip)
2015-02-19 15:45 UTC, thierry bordaz
console output with verification steps (26.08 KB, text/plain)
2015-10-08 10:47 UTC, Kaleem

System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2362 0 normal SHIPPED_LIVE ipa bug fix and enhancement update 2015-11-19 10:40:46 UTC

Description Kaleem 2015-01-30 10:54:57 UTC
Created attachment 985934 [details]
steps with console output

Description of problem:
The replication agreement with the replica is not disabled when ipa-restore is done without IPA installed. Consequently, the replica is able to push changes to the restored master without a re-initialization from the master, which causes data corruption.

Version-Release number of selected component (if applicable):
[root@master ~]# rpm -q ipa-server
[root@master ~]# 

How reproducible:

Steps to Reproduce:
1. See the attached file, which contains the console output along with the steps.

Actual results:
The replication agreement with the existing replica is not disabled when doing an IPA restore without IPA installed.

Expected results:
The replication agreement with the replica should be disabled when doing an IPA restore without IPA installed.

Comment 2 thierry bordaz 2015-01-30 14:52:35 UTC
Looking at the logs was a bad idea. The backup seems to change them and I think we cannot trust them.

I am trying to reproduce this, but I am having difficulties after step 11 completes.
The ipa-restore fails with a hostname problem:

ipa-restore /var/lib/ipa/backup/ipa-full-2015-01-30-09-09-23
Directory Manager (existing master) password: 

Preparing restore from /var/lib/ipa/backup/ipa-full-2015-01-30-09-09-23 on localhost.localdomain
Performing FULL restore from FULL backup
Host name localhost.localdomain does not match backup name vm-028.idm.lab.bos.redhat.com

Note that I installed the master/replica without a reverse zone (--forwarder= or specifying --ip-address=). But in the end, replication was working fine.

Also, looking at the userRoot.ldif file in the backup, I noticed it contains the RUV. This is not a problem if the RUV is cleared at restore time.

Comment 3 thierry bordaz 2015-01-30 18:51:10 UTC
Using Kaleem's environment, I was able to reproduce the problem systematically.

I modified the backup:
   - remove the RUV from the userRoot.ldif
   - remove the RUV from ./etc/dirsrv/slapd-TESTRELM-TEST/dse.ldif (in files.tar)

These were the only 2 ldif files containing the replicageneration.
So either there is another ldif file in the backup that contains the RUV and I missed it, or the ldif files were not used at the end of the restore.
One possibility is that the backed-up database files ('*.db') were applied after the ldif import.
I wanted to check that, but the error logs are also modified by ipa-restore.
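The RUV removal described in this comment can be sketched as a small filter over LDIF entries. This is an illustrative helper only, not the actual code used for the test; it assumes the RUV lives in the entry carrying the nsds50ruv attribute (the 389-ds RUV tombstone) and that entries are separated by blank lines, as in a plain ldif2db export:

```python
# Illustrative sketch (not the actual modification made to the backup):
# drop the RUV tombstone entry from an LDIF export. Assumes the RUV is
# the entry carrying the nsds50ruv attribute and that entries are
# separated by blank lines.
def strip_ruv(ldif_text):
    """Return the LDIF text with any entry containing nsds50ruv removed."""
    kept = []
    for entry in ldif_text.split("\n\n"):
        if "nsds50ruv" in entry.lower():
            continue  # this entry holds the replica update vector
        kept.append(entry)
    return "\n\n".join(kept)
```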

Comment 4 Petr Vobornik 2015-02-09 12:35:19 UTC
Upstream ticket:

Comment 5 Martin Kosek 2015-02-10 12:53:02 UTC
Waiting on the finished investigation from Thierry to find the root cause.

Comment 6 thierry bordaz 2015-02-11 15:49:35 UTC
During the restore, the database files are restored and then the backends are reimported from ldif files (ipaca.ldif and userroot.ldif).

The ldif files are imported only on the condition that they exist.
For a FULL restore, the ldif files are temporary files (e.g. /tmp/tmpEoVEO5ipa/ipa/EXAMPLE-COM-userRoot.ldif). The test (os.path.exists) on those temporary files fails. This is the reason why they are not imported (ldif2db).

I will test a fix to allow the import. I assume that if the import (without the RUV) is successful, then the other replicas will not be able to replicate to the restored instance.
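The failing check described in this comment can be sketched as follows (illustrative names, not the actual ipa-restore code; the real path is inside the temporary directory the restore extracts into):

```python
import os

# Illustrative sketch of the behaviour described above: the restore only
# runs ldif2db when the extracted ldif file appears to exist. When the
# os.path.exists test fails on the temporary path, the import is skipped
# silently and the backend keeps the data from the restored *.db files.
def should_run_ldif2db(tmpdir, ldif_name):
    ldif_path = os.path.join(tmpdir, ldif_name)
    return os.path.exists(ldif_path)
```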

Comment 7 Martin Kosek 2015-02-12 08:37:31 UTC
Thierry, should we document this as a Known Issue in 7.1? Do we know the exact root cause?

Comment 8 thierry bordaz 2015-02-13 17:05:19 UTC
Still fighting with the fix.
A first problem is that when using a temporary directory to extract the backup, the test os.path.exists on the extracted ldif files returns False.
I had to use tarinfo to check the existence of the file and then add them to the set of files to import.

It works, but then we want to remove the RUV from the files, and it fails to open some of them. It is not systematically the same file that fails, so I believe syncing the file system might fix it, but I do not know how to do that.

Currently the FULL restore is not working as expected; we can document that until we get a true fix.
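The tarinfo-based workaround mentioned in this comment can be sketched like this (assumed function and member names; the real code walks the backup archive differently):

```python
import tarfile

# Illustrative sketch of the workaround: instead of testing the extracted
# path with os.path.exists, consult the backup tarball itself for the
# member names, and keep only the candidates really present in the archive.
def ldifs_present_in_backup(backup_tar, candidates):
    with tarfile.open(backup_tar) as tar:
        members = set(tar.getnames())
    return [name for name in candidates if name in members]
```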

Comment 9 Jan Cholasta 2015-02-17 08:47:00 UTC
That's weird. Did you check the audit log for SELinux AVCs?

I don't think syncing the FS will help, as it merely flushes FS buffers. If the file isn't there in the first place, there is nothing to flush (?)

You can try adding "print repr(filename)" above the os.path.exists call to see if there are any unusual characters in the filename.
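The repr() suggestion exposes otherwise invisible characters; a minimal illustration (the stray trailing newline is a made-up example of the kind of junk it might reveal):

```python
import os

# Minimal illustration of the debugging tip above: repr() makes invisible
# characters visible. The trailing newline here is a made-up example of
# junk that would make os.path.exists fail on an otherwise correct name.
filename = "EXAMPLE-COM-userRoot.ldif\n"
print(repr(filename))            # the \n becomes visible in the output
print(os.path.exists(filename))  # a path with a stray newline won't exist
```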

Comment 11 thierry bordaz 2015-02-19 15:37:48 UTC
RUV removal works and should address the problem of the restored server receiving old updates.

The problem is understood, but due to my lack of knowledge of Python I am not able to find a fix.
The restore is done using a tarball. The tarball contains the backend ldif files.
Files (including the backend ldif files) are extracted from the tarball, but are not accessible: os.path.exists or open fails on those files.
I do not know why those calls fail (no such file) even though I can detect the files in the tarball.

I think removing the RUV from the ldif is the best approach, which is why I prefer to make it work.

I am attaching the trace of the debug.

Comment 12 Martin Kosek 2015-02-19 15:42:10 UTC
Jan, can you please assist Thierry with the Python parts?

Comment 13 thierry bordaz 2015-02-19 15:45:41 UTC
Created attachment 993684 [details]
modified ipa-restore.py+log files (traces added prefixed by 'XXX')

Comment 29 Kaleem 2015-10-08 10:46:01 UTC

IPA Version:
[root@dhcp207-229 ~]# rpm -q ipa-server
[root@dhcp207-229 ~]# 

See the attached file for the console output of the verification steps.

Comment 30 Kaleem 2015-10-08 10:47:06 UTC
Created attachment 1080969 [details]
console output with verification steps

Comment 31 errata-xmlrpc 2015-11-19 12:01:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

