Bug 1533385 - [DR] - Fail back cleans the entire setup instead of cleaning only the storage domains mapped in the mapping var file
Summary: [DR] - Fail back cleans the entire setup instead of cleaning only the storage domains mapped in the mapping var file
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-ansible-collection
Classification: oVirt
Component: disaster-recovery
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.2.2
Target Release: ---
Assignee: Maor
QA Contact: Elad
URL:
Whiteboard: DR
Depends On:
Blocks:
 
Reported: 2018-01-11 08:53 UTC by Maor
Modified: 2018-05-10 06:23 UTC
CC List: 5 users

Fixed In Version: ovirt-ansible-disaster-recovery-0.2
Doc Type: Bug Fix
Doc Text:
Cause: Fail back was applied to the entire setup and cleaned all the storage domains.
Consequence: Fail back cleaned the entire setup, even storage domains that are not mapped in the var file.
Fix: Cleanup is now done only on the mapped storage domains.
Result: Fail back removes only the mapped active storage domains. An additional fix is still needed for storage domains in Maintenance.
Clone Of:
Environment:
Last Closed: 2018-05-10 06:23:10 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.2+
ylavi: blocker+




Links
System   ID                                                                 Private  Priority  Status  Summary  Last Updated
Github   https://github.com/oVirt/ovirt-ansible-disaster-recovery/pull/15  0        None      None    None     2018-02-09 01:01:24 UTC

Description Maor 2018-01-11 08:53:43 UTC
Description of problem:
We should support cleanup of only specific storage domains instead of cleaning the entire secondary setup.
This targeted cleanup of a single storage domain is also necessary for testing purposes.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Run fail back on a setup containing multiple storage domains while only one storage domain is listed in the mapping var file

Actual results:
All storage domains in the secondary setup are detached, not only the ones mapped in the var file

Expected results:
The process should go over only the mapped storage domains, detaching and removing them.

Additional info:
As part of the fail back scenario we shut down all VMs so that the storage domains can be detached successfully.
When only specific storage domains are cleaned, the solution will shut down only the VMs related to their data center.
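
To make the intended scoping concrete, here is a minimal Python sketch of the core idea: read the mapping var file, keep only the storage domains named in it, and leave every other domain in the secondary setup untouched. The var-file key names (dr_import_storages, dr_secondary_name) and the helper functions are assumptions for illustration only; the role's real var-file layout and code may differ.

# Illustrative sketch only -- not the role's actual implementation.
# Assumes a mapping var file shaped roughly like (hypothetical key names):
#   dr_import_storages:
#     - dr_secondary_name: nfs_dr_1
#       ...
import yaml

def mapped_domain_names(var_file_path):
    """Collect the storage-domain names listed in the DR mapping var file."""
    with open(var_file_path) as f:
        mapping = yaml.safe_load(f) or {}
    return {
        entry.get('dr_secondary_name')
        for entry in mapping.get('dr_import_storages', [])
        if entry.get('dr_secondary_name')
    }

def domains_to_clean(var_file_path, setup_domain_names):
    """Return only the domains that fail back should detach and remove;
    unmapped domains are left untouched, which is what this bug asks for."""
    mapped = mapped_domain_names(var_file_path)
    return [name for name in setup_domain_names if name in mapped]

The same scoping would apply to the VM shutdown step: instead of stopping every VM in the setup, only the VMs of the data center(s) that the mapped domains belong to need to be shut down.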

Comment 1 Allon Mureinik 2018-02-08 10:30:15 UTC
Maor, what's going on with this BZ?
It's on POST, but there's no reference to a patch. Is it in the works?

Comment 2 Maor 2018-02-08 14:19:04 UTC
(In reply to Allon Mureinik from comment #1)
> Maor, what's going on with this BZ?
> It's on POST, but there's no reference to a patch. Is it in the works?

Yes, I'm still working on it. The work is published here: https://github.com/oVirt/ovirt-ansible-disaster-recovery/tree/BZ1533385
but it hasn't been opened as a pull request yet. I will open the PR today and update the bug as well.

Comment 3 Allon Mureinik 2018-02-27 14:19:38 UTC
Maor, please update the target release too.

Comment 4 Maor 2018-03-01 13:27:37 UTC
(In reply to Allon Mureinik from comment #3)
> Maor, please update the target release too.

Since ovirt-ansible-roles is a metapackage which requires other RPMs, its version isn't bumped.
As advised by Ondra, I will update the "Fixed In Version" field instead.

Comment 5 Elad 2018-05-09 17:50:19 UTC
Fail back removes only the mapped storage domains that exist in the var file, not the entire setup.
Please note that for this behavior to take place, all the domains in the fail-back target site have to be in Active status.
Domains that are in Maintenance, for example, will still go through cleanup even if they are not listed in the var file.
A separate BZ was opened for this issue: bug 1576553.


Used:
ovirt-ansible-disaster-recovery-0.4-1.el7ev.noarch
ansible-2.5.2-1.el7ae.noarch
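
To illustrate the status caveat above, here is a hedged sketch using the oVirt Python SDK (ovirt-engine-sdk4) of how the target site's domains could be partitioned: mapped domains in Active status fall under the scoped cleanup, while domains in Maintenance are still swept unconditionally (the behavior split out into bug 1576553). This is illustrative only, not the role's code, and it assumes the status is readable from the top-level storage-domain listing, whereas in practice it is reported per data-center attachment.

# Illustrative sketch, not the role's implementation.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

def partition_for_cleanup(connection, mapped_names):
    """Split the fail-back target site's storage domains into:
    - scoped: mapped and Active, removed by the scoped cleanup;
    - swept_anyway: in Maintenance, still cleaned even if unmapped."""
    sds = connection.system_service().storage_domains_service().list()
    scoped, swept_anyway = [], []
    for sd in sds:
        if sd.name in mapped_names and sd.status == types.StorageDomainStatus.ACTIVE:
            scoped.append(sd.name)
        elif sd.status == types.StorageDomainStatus.MAINTENANCE:
            swept_anyway.append(sd.name)
    return scoped, swept_anyway

# Example usage (connection details are placeholders):
# connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
#                             username='admin@internal', password='***',
#                             insecure=True)
# scoped, swept = partition_for_cleanup(connection, {'nfs_dr_1'})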

Comment 6 Sandro Bonazzola 2018-05-10 06:23:10 UTC
This bug is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

